Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-06 20:10
Elapsed: 48m31s
Revision
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 623 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
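
The three lines above come from ./hack/create-identity-secret.sh: it checks for the secret, creates it when missing, and labels it. A minimal bash sketch of that flow is below; the namespace, secret key, and label key are placeholders, since the script's actual variable names and label are not visible in this excerpt.

# Sketch only: recreate the AzureClusterIdentity secret if it does not exist yet.
SECRET_NAME="cluster-identity-secret"
NAMESPACE="default"                              # placeholder; actual namespace not shown here
if ! kubectl get secret "${SECRET_NAME}" -n "${NAMESPACE}" >/dev/null 2>&1; then
  kubectl create secret generic "${SECRET_NAME}" -n "${NAMESPACE}" \
    --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"   # assumed key/env var
fi
# Label it so the provider components can select it (label key is a placeholder).
kubectl label secret "${SECRET_NAME}" -n "${NAMESPACE}" example.com/identity=true --overwrite
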
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 137 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-lcwcec-kubeconfig; do sleep 1; done"
capz-lcwcec-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-lcwcec-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-lcwcec-control-plane-jfxgf   NotReady   <none>   1s    v1.22.14-rc.0.5+710e88673218ed
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
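
The two timeout loops above poll for the CAPI-generated kubeconfig secret, decode it by hand with jq and base64, and then wait for the control-plane node to register. The "doesn't have a resource type \"nodes\"" line is apparently just an early iteration of that retry loop failing while the workload API server finishes coming up. A shorter, roughly equivalent sketch, assuming clusterctl is installed and the cluster is named capz-lcwcec:

# Wait until the workload cluster kubeconfig secret exists, then fetch it via clusterctl.
timeout --foreground 300 bash -c \
  'until kubectl get secret capz-lcwcec-kubeconfig >/dev/null 2>&1; do sleep 1; done'
clusterctl get kubeconfig capz-lcwcec > ./kubeconfig
# Wait for the control-plane node to register (it may still be NotReady at this point).
timeout --foreground 600 bash -c \
  'until kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane; do sleep 1; done'
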
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-lcwcec-control-plane-jfxgf condition met
node/capz-lcwcec-md-0-jzv54 condition met
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:28:35.706: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-khzgk" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  6 20:28:35.745: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 39.048877ms
Sep  6 20:28:37.783: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077218799s
Sep  6 20:28:39.819: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113670292s
Sep  6 20:28:41.857: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.151135429s
Sep  6 20:28:43.894: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.188291089s
Sep  6 20:28:45.930: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224249696s
Sep  6 20:28:47.972: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.266615549s
Sep  6 20:28:50.010: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.304543171s
Sep  6 20:28:52.050: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.34476447s
Sep  6 20:28:54.090: INFO: Pod "azuredisk-volume-tester-khzgk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.384752669s
STEP: Saw pod success
Sep  6 20:28:54.091: INFO: Pod "azuredisk-volume-tester-khzgk" satisfied condition "Succeeded or Failed"
Sep  6 20:28:54.091: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-khzgk"
Sep  6 20:28:54.140: INFO: Pod azuredisk-volume-tester-khzgk has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-khzgk in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:28:54.257: INFO: deleting PVC "azuredisk-8081"/"pvc-cxbbh"
Sep  6 20:28:54.257: INFO: Deleting PersistentVolumeClaim "pvc-cxbbh"
STEP: waiting for claim's PV "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" to be deleted
Sep  6 20:28:54.296: INFO: Waiting up to 10m0s for PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 to get deleted
Sep  6 20:28:54.331: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Released (34.722601ms)
Sep  6 20:28:59.367: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Failed (5.071458605s)
Sep  6 20:29:04.406: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Failed (10.109753217s)
Sep  6 20:29:09.443: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Failed (15.147502568s)
Sep  6 20:29:14.480: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Failed (20.183741427s)
Sep  6 20:29:19.519: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 found and phase=Failed (25.223025891s)
Sep  6 20:29:24.555: INFO: PersistentVolume pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 was removed
Sep  6 20:29:24.555: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  6 20:29:24.591: INFO: Claim "azuredisk-8081" in namespace "pvc-cxbbh" doesn't exist in the system
Sep  6 20:29:24.591: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-gkwwq
Sep  6 20:29:24.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  6 20:29:42.919: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-lcq44"
Sep  6 20:29:42.964: INFO: Error getting logs for pod azuredisk-volume-tester-lcq44: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-lcq44)
STEP: Deleting pod azuredisk-volume-tester-lcq44 in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:29:43.072: INFO: deleting PVC "azuredisk-5466"/"pvc-mc9qt"
Sep  6 20:29:43.072: INFO: Deleting PersistentVolumeClaim "pvc-mc9qt"
STEP: waiting for claim's PV "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" to be deleted
... skipping 18 lines ...
Sep  6 20:31:08.792: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m25.681987939s)
Sep  6 20:31:13.827: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m30.717592623s)
Sep  6 20:31:18.868: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m35.757707684s)
Sep  6 20:31:23.904: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m40.794171707s)
Sep  6 20:31:28.941: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m45.831286654s)
Sep  6 20:31:33.977: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Bound (1m50.867545274s)
Sep  6 20:31:39.013: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (1m55.902968532s)
Sep  6 20:31:44.050: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (2m0.939874341s)
Sep  6 20:31:49.087: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (2m5.977250141s)
Sep  6 20:31:54.124: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (2m11.014125742s)
Sep  6 20:31:59.163: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (2m16.053251822s)
Sep  6 20:32:04.200: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 found and phase=Failed (2m21.089893181s)
Sep  6 20:32:09.239: INFO: PersistentVolume pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 was removed
Sep  6 20:32:09.239: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  6 20:32:09.274: INFO: Claim "azuredisk-5466" in namespace "pvc-mc9qt" doesn't exist in the system
Sep  6 20:32:09.274: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-2gkmc
Sep  6 20:32:09.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:32:10.093: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9h6kj" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  6 20:32:10.128: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 35.197273ms
Sep  6 20:32:12.164: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071564355s
Sep  6 20:32:14.204: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111303366s
Sep  6 20:32:16.241: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148025719s
Sep  6 20:32:18.277: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184438097s
Sep  6 20:32:20.313: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22044655s
Sep  6 20:32:22.350: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 12.257479244s
Sep  6 20:32:24.388: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 14.294725395s
Sep  6 20:32:26.425: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.331932599s
Sep  6 20:32:28.462: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Pending", Reason="", readiness=false. Elapsed: 18.369558192s
Sep  6 20:32:30.500: INFO: Pod "azuredisk-volume-tester-9h6kj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.407314008s
STEP: Saw pod success
Sep  6 20:32:30.500: INFO: Pod "azuredisk-volume-tester-9h6kj" satisfied condition "Succeeded or Failed"
Sep  6 20:32:30.500: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-9h6kj"
Sep  6 20:32:30.544: INFO: Pod azuredisk-volume-tester-9h6kj has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-9h6kj in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:32:30.665: INFO: deleting PVC "azuredisk-2790"/"pvc-kxn55"
Sep  6 20:32:30.665: INFO: Deleting PersistentVolumeClaim "pvc-kxn55"
STEP: waiting for claim's PV "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" to be deleted
Sep  6 20:32:30.708: INFO: Waiting up to 10m0s for PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 to get deleted
Sep  6 20:32:30.747: INFO: PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 found and phase=Failed (38.79962ms)
Sep  6 20:32:35.786: INFO: PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 found and phase=Failed (5.077974216s)
Sep  6 20:32:40.826: INFO: PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 found and phase=Failed (10.117151969s)
Sep  6 20:32:45.866: INFO: PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 found and phase=Failed (15.157285748s)
Sep  6 20:32:50.903: INFO: PersistentVolume pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 was removed
Sep  6 20:32:50.904: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  6 20:32:50.939: INFO: Claim "azuredisk-2790" in namespace "pvc-kxn55" doesn't exist in the system
Sep  6 20:32:50.940: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-kwl59
Sep  6 20:32:50.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  6 20:32:51.742: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-q7xr4" in namespace "azuredisk-5356" to be "Error status code"
Sep  6 20:32:51.778: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.191809ms
Sep  6 20:32:53.817: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074513438s
Sep  6 20:32:55.853: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110824731s
Sep  6 20:32:57.890: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147234474s
Sep  6 20:32:59.927: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184722441s
Sep  6 20:33:01.964: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221049566s
Sep  6 20:33:04.001: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.25856256s
Sep  6 20:33:06.039: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.29601676s
Sep  6 20:33:08.074: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.331814176s
Sep  6 20:33:10.111: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.368997955s
Sep  6 20:33:12.152: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.40987088s
Sep  6 20:33:14.193: INFO: Pod "azuredisk-volume-tester-q7xr4": Phase="Failed", Reason="", readiness=false. Elapsed: 22.450761863s
STEP: Saw pod failure
Sep  6 20:33:14.193: INFO: Pod "azuredisk-volume-tester-q7xr4" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  6 20:33:14.236: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-q7xr4"
Sep  6 20:33:14.273: INFO: Pod azuredisk-volume-tester-q7xr4 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-q7xr4 in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:33:14.388: INFO: deleting PVC "azuredisk-5356"/"pvc-zmh7b"
Sep  6 20:33:14.388: INFO: Deleting PersistentVolumeClaim "pvc-zmh7b"
STEP: waiting for claim's PV "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" to be deleted
Sep  6 20:33:14.425: INFO: Waiting up to 10m0s for PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 to get deleted
Sep  6 20:33:14.460: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Released (35.194078ms)
Sep  6 20:33:19.500: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (5.074811225s)
Sep  6 20:33:24.538: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (10.113605574s)
Sep  6 20:33:29.575: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (15.15059084s)
Sep  6 20:33:34.616: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (20.190948769s)
Sep  6 20:33:39.655: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (25.230206791s)
Sep  6 20:33:44.696: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (30.270755213s)
Sep  6 20:33:49.732: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 found and phase=Failed (35.307223979s)
Sep  6 20:33:54.770: INFO: PersistentVolume pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 was removed
Sep  6 20:33:54.770: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  6 20:33:54.805: INFO: Claim "azuredisk-5356" in namespace "pvc-zmh7b" doesn't exist in the system
Sep  6 20:33:54.805: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-l244r
Sep  6 20:33:54.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 53 lines ...
Sep  6 20:34:55.346: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (5.069932614s)
Sep  6 20:35:00.384: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (10.10741242s)
Sep  6 20:35:05.424: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (15.14804757s)
Sep  6 20:35:10.463: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (20.186709695s)
Sep  6 20:35:15.502: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (25.225473195s)
Sep  6 20:35:20.537: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Bound (30.261264592s)
Sep  6 20:35:25.576: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Failed (35.300321215s)
Sep  6 20:35:30.613: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Failed (40.336985117s)
Sep  6 20:35:35.651: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Failed (45.374511289s)
Sep  6 20:35:40.687: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Failed (50.410616675s)
Sep  6 20:35:45.723: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 found and phase=Failed (55.446697265s)
Sep  6 20:35:50.758: INFO: PersistentVolume pvc-2fdef245-5695-4274-98cb-008e1afa81f9 was removed
Sep  6 20:35:50.758: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  6 20:35:50.793: INFO: Claim "azuredisk-5194" in namespace "pvc-5c87b" doesn't exist in the system
Sep  6 20:35:50.793: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-bhwgl
Sep  6 20:35:50.829: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-zd49j"
Sep  6 20:35:50.866: INFO: Pod azuredisk-volume-tester-zd49j has the following logs: 
... skipping 8 lines ...
Sep  6 20:35:56.082: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (5.071942518s)
Sep  6 20:36:01.119: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (10.109072918s)
Sep  6 20:36:06.157: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (15.147737895s)
Sep  6 20:36:11.194: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (20.183816325s)
Sep  6 20:36:16.230: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (25.220373609s)
Sep  6 20:36:21.267: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Bound (30.25731797s)
Sep  6 20:36:26.303: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Failed (35.292983602s)
Sep  6 20:36:31.338: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Failed (40.328768109s)
Sep  6 20:36:36.374: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Failed (45.364428618s)
Sep  6 20:36:41.414: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Failed (50.404251569s)
Sep  6 20:36:46.452: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 found and phase=Failed (55.442066834s)
Sep  6 20:36:51.487: INFO: PersistentVolume pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 was removed
Sep  6 20:36:51.487: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  6 20:36:51.521: INFO: Claim "azuredisk-5194" in namespace "pvc-jtkbc" doesn't exist in the system
Sep  6 20:36:51.521: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-fpklz
Sep  6 20:36:51.565: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-x9fp7"
Sep  6 20:36:51.615: INFO: Pod azuredisk-volume-tester-x9fp7 has the following logs: 
... skipping 8 lines ...
Sep  6 20:36:56.829: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (5.071112161s)
Sep  6 20:37:01.867: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (10.108450838s)
Sep  6 20:37:06.902: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (15.143591019s)
Sep  6 20:37:11.941: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (20.182843806s)
Sep  6 20:37:16.980: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (25.221920788s)
Sep  6 20:37:22.019: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Bound (30.261164223s)
Sep  6 20:37:27.055: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (35.296964566s)
Sep  6 20:37:32.091: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (40.33312252s)
Sep  6 20:37:37.131: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (45.372645469s)
Sep  6 20:37:42.170: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (50.411679966s)
Sep  6 20:37:47.209: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (55.451101957s)
Sep  6 20:37:52.245: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (1m0.486894385s)
Sep  6 20:37:57.280: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (1m5.522320353s)
Sep  6 20:38:02.319: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 found and phase=Failed (1m10.56099539s)
Sep  6 20:38:07.361: INFO: PersistentVolume pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 was removed
Sep  6 20:38:07.361: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  6 20:38:07.395: INFO: Claim "azuredisk-5194" in namespace "pvc-45vbn" doesn't exist in the system
Sep  6 20:38:07.395: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-tr729
Sep  6 20:38:07.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 58 lines ...
Sep  6 20:39:35.136: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (5.071233806s)
Sep  6 20:39:40.177: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (10.111938745s)
Sep  6 20:39:45.212: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (15.147555829s)
Sep  6 20:39:50.252: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (20.18743013s)
Sep  6 20:39:55.287: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (25.222666406s)
Sep  6 20:40:00.324: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Bound (30.259075372s)
Sep  6 20:40:05.360: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (35.295365337s)
Sep  6 20:40:10.399: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (40.334174336s)
Sep  6 20:40:15.433: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (45.368829155s)
Sep  6 20:40:20.469: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (50.404839657s)
Sep  6 20:40:25.507: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (55.442207162s)
Sep  6 20:40:30.542: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 found and phase=Failed (1m0.477136864s)
Sep  6 20:40:35.581: INFO: PersistentVolume pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 was removed
Sep  6 20:40:35.581: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  6 20:40:35.616: INFO: Claim "azuredisk-1353" in namespace "pvc-vx77l" doesn't exist in the system
Sep  6 20:40:35.616: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-54f6c
Sep  6 20:40:35.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:40:53.274: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-bcg46" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  6 20:40:53.314: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 39.886042ms
Sep  6 20:40:55.350: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076025361s
Sep  6 20:40:57.392: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118332898s
Sep  6 20:40:59.431: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156483246s
Sep  6 20:41:01.469: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.19471596s
Sep  6 20:41:03.508: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.233533824s
... skipping 10 lines ...
Sep  6 20:41:25.929: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 32.655014169s
Sep  6 20:41:27.967: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 34.693329437s
Sep  6 20:41:30.004: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 36.730083749s
Sep  6 20:41:32.041: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Pending", Reason="", readiness=false. Elapsed: 38.766908008s
Sep  6 20:41:34.079: INFO: Pod "azuredisk-volume-tester-bcg46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.805186048s
STEP: Saw pod success
Sep  6 20:41:34.079: INFO: Pod "azuredisk-volume-tester-bcg46" satisfied condition "Succeeded or Failed"
Sep  6 20:41:34.079: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-bcg46"
Sep  6 20:41:34.123: INFO: Pod azuredisk-volume-tester-bcg46 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-bcg46 in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:41:34.236: INFO: deleting PVC "azuredisk-59"/"pvc-79h52"
Sep  6 20:41:34.236: INFO: Deleting PersistentVolumeClaim "pvc-79h52"
STEP: waiting for claim's PV "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" to be deleted
Sep  6 20:41:34.272: INFO: Waiting up to 10m0s for PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 to get deleted
Sep  6 20:41:34.307: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Released (34.239826ms)
Sep  6 20:41:39.342: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (5.069724784s)
Sep  6 20:41:44.377: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (10.105039464s)
Sep  6 20:41:49.415: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (15.14256093s)
Sep  6 20:41:54.454: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (20.182052117s)
Sep  6 20:41:59.490: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (25.217714353s)
Sep  6 20:42:04.528: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (30.255434177s)
Sep  6 20:42:09.566: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (35.29368539s)
Sep  6 20:42:14.604: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (40.331543678s)
Sep  6 20:42:19.642: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 found and phase=Failed (45.370057282s)
Sep  6 20:42:24.679: INFO: PersistentVolume pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 was removed
Sep  6 20:42:24.679: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  6 20:42:24.713: INFO: Claim "azuredisk-59" in namespace "pvc-79h52" doesn't exist in the system
Sep  6 20:42:24.713: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-4tq5l
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:42:46.120: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-b8rww" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  6 20:42:46.154: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 34.55354ms
Sep  6 20:42:48.189: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069399281s
Sep  6 20:42:50.226: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105950137s
Sep  6 20:42:52.261: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141579793s
Sep  6 20:42:54.298: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 8.178014889s
Sep  6 20:42:56.334: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 10.214171261s
... skipping 2 lines ...
Sep  6 20:43:02.444: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 16.323815869s
Sep  6 20:43:04.479: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 18.359356495s
Sep  6 20:43:06.515: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 20.395606835s
Sep  6 20:43:08.553: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Pending", Reason="", readiness=false. Elapsed: 22.433482446s
Sep  6 20:43:10.591: INFO: Pod "azuredisk-volume-tester-b8rww": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.470940236s
STEP: Saw pod success
Sep  6 20:43:10.591: INFO: Pod "azuredisk-volume-tester-b8rww" satisfied condition "Succeeded or Failed"
Sep  6 20:43:10.591: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-b8rww"
Sep  6 20:43:10.637: INFO: Pod azuredisk-volume-tester-b8rww has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.057600 seconds, 1.7GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  6 20:43:10.757: INFO: deleting PVC "azuredisk-2546"/"pvc-b7x95"
Sep  6 20:43:10.757: INFO: Deleting PersistentVolumeClaim "pvc-b7x95"
STEP: waiting for claim's PV "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" to be deleted
Sep  6 20:43:10.793: INFO: Waiting up to 10m0s for PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 to get deleted
Sep  6 20:43:10.826: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Released (33.862573ms)
Sep  6 20:43:15.865: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (5.072601234s)
Sep  6 20:43:20.903: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (10.109920375s)
Sep  6 20:43:25.941: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (15.148884254s)
Sep  6 20:43:30.979: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (20.185988599s)
Sep  6 20:43:36.014: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (25.221676262s)
Sep  6 20:43:41.050: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (30.257505648s)
Sep  6 20:43:46.086: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 found and phase=Failed (35.293540574s)
Sep  6 20:43:51.123: INFO: PersistentVolume pvc-1a9103ad-679b-4429-93dd-f322a37a2855 was removed
Sep  6 20:43:51.123: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  6 20:43:51.158: INFO: Claim "azuredisk-2546" in namespace "pvc-b7x95" doesn't exist in the system
Sep  6 20:43:51.158: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-qnrmx
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:44:03.507: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2kmvr" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  6 20:44:03.549: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 41.695085ms
Sep  6 20:44:05.585: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077712006s
Sep  6 20:44:07.623: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.115696018s
Sep  6 20:44:09.660: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152683128s
Sep  6 20:44:11.697: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.190087349s
Sep  6 20:44:13.737: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.229938609s
... skipping 8 lines ...
Sep  6 20:44:32.076: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 28.569260583s
Sep  6 20:44:34.113: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 30.606306869s
Sep  6 20:44:36.151: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 32.644078003s
Sep  6 20:44:38.189: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Pending", Reason="", readiness=false. Elapsed: 34.682099606s
Sep  6 20:44:40.226: INFO: Pod "azuredisk-volume-tester-2kmvr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.718995888s
STEP: Saw pod success
Sep  6 20:44:40.226: INFO: Pod "azuredisk-volume-tester-2kmvr" satisfied condition "Succeeded or Failed"
Sep  6 20:44:40.226: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-2kmvr"
Sep  6 20:44:40.263: INFO: Pod azuredisk-volume-tester-2kmvr has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-2kmvr in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:44:40.377: INFO: deleting PVC "azuredisk-8582"/"pvc-j8g6j"
Sep  6 20:44:40.377: INFO: Deleting PersistentVolumeClaim "pvc-j8g6j"
STEP: waiting for claim's PV "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" to be deleted
Sep  6 20:44:40.412: INFO: Waiting up to 10m0s for PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 to get deleted
Sep  6 20:44:40.447: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Released (34.699548ms)
Sep  6 20:44:45.486: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Failed (5.073190926s)
Sep  6 20:44:50.521: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Failed (10.108860567s)
Sep  6 20:44:55.558: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Failed (15.145487402s)
Sep  6 20:45:00.595: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Failed (20.18255884s)
Sep  6 20:45:05.633: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 found and phase=Failed (25.220123791s)
Sep  6 20:45:10.668: INFO: PersistentVolume pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051 was removed
Sep  6 20:45:10.668: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  6 20:45:10.702: INFO: Claim "azuredisk-8582" in namespace "pvc-j8g6j" doesn't exist in the system
Sep  6 20:45:10.702: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-rv7jl
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:45:10.807: INFO: deleting PVC "azuredisk-8582"/"pvc-sv5c5"
Sep  6 20:45:10.807: INFO: Deleting PersistentVolumeClaim "pvc-sv5c5"
STEP: waiting for claim's PV "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" to be deleted
Sep  6 20:45:10.843: INFO: Waiting up to 10m0s for PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 to get deleted
Sep  6 20:45:10.881: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 found and phase=Failed (37.94071ms)
Sep  6 20:45:15.920: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 found and phase=Failed (5.077178201s)
Sep  6 20:45:20.959: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 found and phase=Failed (10.116436077s)
Sep  6 20:45:25.995: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 found and phase=Failed (15.15201942s)
Sep  6 20:45:31.030: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 found and phase=Failed (20.187056209s)
Sep  6 20:45:36.066: INFO: PersistentVolume pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 was removed
Sep  6 20:45:36.066: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  6 20:45:36.101: INFO: Claim "azuredisk-8582" in namespace "pvc-sv5c5" doesn't exist in the system
Sep  6 20:45:36.101: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-twxpd
STEP: validating provisioned PV
STEP: checking the PV
... skipping 394 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  6 20:48:47.891: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.326 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 247 lines ...
I0906 20:24:33.638334       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-06 20:17:29 +0000 UTC to 2032-09-03 20:22:29 +0000 UTC (now=2022-09-06 20:24:33.638310485 +0000 UTC))"
I0906 20:24:33.638722       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662495872\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662495872\" (2022-09-06 19:24:31 +0000 UTC to 2023-09-06 19:24:31 +0000 UTC (now=2022-09-06 20:24:33.638695496 +0000 UTC))"
I0906 20:24:33.639071       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662495873\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662495872\" (2022-09-06 19:24:32 +0000 UTC to 2023-09-06 19:24:32 +0000 UTC (now=2022-09-06 20:24:33.639046406 +0000 UTC))"
I0906 20:24:33.639238       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0906 20:24:33.639380       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0906 20:24:33.639986       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0906 20:24:35.377645       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0906 20:24:35.377912       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0906 20:24:38.458975       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0906 20:24:38.459741       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-lcwcec-control-plane-jfxgf_41d60f4a-612e-4869-bca6-0efd6fe6130e became leader"
I0906 20:24:38.724814       1 request.go:597] Waited for 97.788291ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/networking.k8s.io/v1?timeout=32s
I0906 20:24:38.775028       1 request.go:597] Waited for 148.041358ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1?timeout=32s
I0906 20:24:38.827090       1 request.go:597] Waited for 200.080352ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiregistration.k8s.io/v1?timeout=32s
I0906 20:24:38.874959       1 request.go:597] Waited for 247.944682ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/policy/v1?timeout=32s
... skipping 59 lines ...
I0906 20:24:40.231671       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0906 20:24:40.231443       1 reflector.go:219] Starting reflector *v1.Node (18h39m46.661235657s) from k8s.io/client-go/informers/factory.go:134
I0906 20:24:40.231901       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0906 20:24:40.231459       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0906 20:24:40.231660       1 reflector.go:219] Starting reflector *v1.Secret (18h39m46.661235657s) from k8s.io/client-go/informers/factory.go:134
I0906 20:24:40.232472       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0906 20:24:40.271643       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0906 20:24:40.271981       1 controllermanager.go:562] Starting "horizontalpodautoscaling"
I0906 20:24:40.333020       1 shared_informer.go:270] caches populated
I0906 20:24:40.333317       1 shared_informer.go:247] Caches are synced for tokens 
I0906 20:24:40.378191       1 controllermanager.go:577] Started "horizontalpodautoscaling"
I0906 20:24:40.378460       1 controllermanager.go:562] Starting "bootstrapsigner"
I0906 20:24:40.378900       1 horizontal.go:169] Starting HPA controller
... skipping 158 lines ...
I0906 20:24:42.385847       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0906 20:24:42.385939       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 20:24:42.386011       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0906 20:24:42.386051       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0906 20:24:42.386195       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0906 20:24:42.386286       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0906 20:24:42.386377       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 20:24:42.386446       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 20:24:42.386740       1 controllermanager.go:577] Started "attachdetach"
I0906 20:24:42.386809       1 controllermanager.go:562] Starting "ephemeral-volume"
I0906 20:24:42.386940       1 attach_detach_controller.go:328] Starting attach detach controller
I0906 20:24:42.387009       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0906 20:24:42.387188       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-control-plane-jfxgf"
W0906 20:24:42.387648       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-lcwcec-control-plane-jfxgf" does not exist
I0906 20:24:42.434706       1 request.go:597] Waited for 191.098872ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/secrets
I0906 20:24:42.484019       1 request.go:597] Waited for 145.642018ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/secrets
I0906 20:24:42.534072       1 request.go:597] Waited for 145.80022ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/attachdetach-controller
I0906 20:24:42.534712       1 controllermanager.go:577] Started "ephemeral-volume"
I0906 20:24:42.534736       1 controllermanager.go:562] Starting "endpointslicemirroring"
I0906 20:24:42.534906       1 controller.go:170] Starting ephemeral volume controller
... skipping 58 lines ...
I0906 20:24:43.385416       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0906 20:24:43.385432       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0906 20:24:43.385447       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0906 20:24:43.385470       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0906 20:24:43.385512       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0906 20:24:43.385527       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0906 20:24:43.385548       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 20:24:43.385559       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0906 20:24:43.387172       1 controllermanager.go:577] Started "persistentvolume-binder"
I0906 20:24:43.387214       1 controllermanager.go:562] Starting "resourcequota"
I0906 20:24:43.387264       1 pv_controller_base.go:308] Starting persistent volume controller
I0906 20:24:43.387331       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0906 20:24:43.440010       1 request.go:597] Waited for 149.917035ms due to client-side throttling, not priority and fairness, request: PUT:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller
... skipping 73 lines ...
I0906 20:24:43.944275       1 graph_builder.go:273] garbage controller monitor not synced: no monitors
I0906 20:24:43.944315       1 graph_builder.go:289] GraphBuilder running
I0906 20:24:43.944207       1 controllermanager.go:577] Started "garbagecollector"
I0906 20:24:43.944602       1 controllermanager.go:562] Starting "cronjob"
I0906 20:24:44.031377       1 request.go:597] Waited for 86.600019ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/cronjob-controller
I0906 20:24:44.082065       1 request.go:597] Waited for 79.501443ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector/token
W0906 20:24:44.115520       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0906 20:24:44.115773       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []
I0906 20:24:44.115797       1 garbagecollector.go:219] reset restmapper
I0906 20:24:44.132039       1 request.go:597] Waited for 97.911584ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system
I0906 20:24:44.185664       1 controllermanager.go:577] Started "cronjob"
I0906 20:24:44.185975       1 controllermanager.go:562] Starting "disruption"
I0906 20:24:44.186228       1 cronjob_controllerv2.go:126] "Starting cronjob controller v2"
... skipping 520 lines ...
I0906 20:24:45.724223       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-06 20:24:45.680420758 +0000 UTC m=+14.122387854 - now: 2022-09-06 20:24:45.724214245 +0000 UTC m=+14.166181441]
I0906 20:24:45.728907       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 20:24:45.686995446 +0000 UTC m=+14.128962542 - now: 2022-09-06 20:24:45.728898708 +0000 UTC m=+14.170865904]
I0906 20:24:45.734118       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0906 20:24:45.737834       1 garbagecollector.go:522] object garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"capz-lcwcec-control-plane-jfxgf", UID:"171dda15-9b57-4b5f-a70c-a6b27350c711", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""} has at least one existing owner: []v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"capz-lcwcec-control-plane-jfxgf", UID:"bd2868f5-346f-4a11-81cf-86a82296a379", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}, will not garbage collect
I0906 20:24:45.742770       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 20:24:45.708021728 +0000 UTC m=+14.149988924 - now: 2022-09-06 20:24:45.74276204 +0000 UTC m=+14.184729236]
I0906 20:24:45.782480       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="857.200175ms"
I0906 20:24:45.782828       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:24:45.783061       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-06 20:24:45.783036447 +0000 UTC m=+14.225003643"
I0906 20:24:45.783872       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-06 20:24:45 +0000 UTC - now: 2022-09-06 20:24:45.783864173 +0000 UTC m=+14.225831369]
I0906 20:24:45.784378       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="858.525729ms"
I0906 20:24:45.785550       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:24:45.784701       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="858.364533ms"
I0906 20:24:45.785132       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (800.893587ms)
I0906 20:24:45.785472       1 disruption.go:391] update DB "calico-kube-controllers"
I0906 20:24:45.785976       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 20:24:45.785954141 +0000 UTC m=+14.227921337"
I0906 20:24:45.787205       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 20:24:45 +0000 UTC - now: 2022-09-06 20:24:45.787199082 +0000 UTC m=+14.229166278]
I0906 20:24:45.786228       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:24:45.787892       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 20:24:45.787828202 +0000 UTC m=+14.229795298"
I0906 20:24:45.788542       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 20:24:45 +0000 UTC - now: 2022-09-06 20:24:45.788535425 +0000 UTC m=+14.230502521]
I0906 20:24:45.790867       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (72.302µs)
I0906 20:24:45.794254       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0906 20:24:45.795711       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="12.66021ms"
I0906 20:24:45.795943       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-06 20:24:45.795900664 +0000 UTC m=+14.237867760"
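The "Error syncing deployment ... the object has been modified" lines above are ordinary optimistic-concurrency conflicts during startup: several writers race to update the same Deployment, the stale write is rejected, and the deployment controller simply requeues and re-syncs. A minimal Go sketch of the same retry-on-conflict pattern with client-go (names such as clientset, ns, and scaleWithConflictRetry are illustrative assumptions, not taken from this build):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // scaleWithConflictRetry re-reads the Deployment on every attempt so each update
    // carries a fresh resourceVersion; conflicts like the ones logged above are retried.
    func scaleWithConflictRetry(ctx context.Context, clientset kubernetes.Interface, ns, name string, replicas int32) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := clientset.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Replicas = &replicas
            _, err = clientset.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
            return err
        })
    }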
... skipping 402 lines ...
I0906 20:25:09.216084       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be09094cdf7d54, ext:37657940300, loc:(*time.Location)(0x751a1a0)}}
I0906 20:25:09.216131       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be09094ce1dd28, ext:37658095904, loc:(*time.Location)(0x751a1a0)}}
I0906 20:25:09.216145       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0906 20:25:09.216182       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0906 20:25:09.216201       1 daemon_controller.go:1102] Updating daemon set status
I0906 20:25:09.216238       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (4.476911ms)
I0906 20:25:09.808960       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-lcwcec-control-plane-jfxgf transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:24:54 +0000 UTC,LastTransitionTime:2022-09-06 20:24:22 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:25:04 +0000 UTC,LastTransitionTime:2022-09-06 20:25:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:25:09.809079       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-control-plane-jfxgf ReadyCondition updated. Updating timestamp.
I0906 20:25:09.809109       1 node_lifecycle_controller.go:893] Node capz-lcwcec-control-plane-jfxgf is healthy again, removing all taints
I0906 20:25:09.809260       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
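The ReadyCondition transition above (KubeletNotReady while the CNI plugin initializes, then KubeletReady once Calico is up, followed by taint removal) is what the harness's "Waiting for ... machine(s) to become Ready" step polls for. A minimal sketch of such a readiness poll, assuming a client-go clientset (waitForNodeReady is a hypothetical helper, not from this repository):

    package example

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForNodeReady polls until the node's Ready condition is True, mirroring the
    // KubeletNotReady -> KubeletReady transition recorded in the log above.
    func waitForNodeReady(clientset kubernetes.Interface, nodeName string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }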
I0906 20:25:13.385447       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.802µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39276" resp=200
I0906 20:25:13.997489       1 daemon_controller.go:570] Pod calico-node-lzq7q updated.
I0906 20:25:13.997945       1 disruption.go:427] updatePod called on pod "calico-node-lzq7q"
... skipping 24 lines ...
I0906 20:25:14.727827       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:14.782250       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:14.891053       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:25:14.952228       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-control-plane-jfxgf"
E0906 20:25:15.033954       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0906 20:25:15.034020       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0906 20:25:15.692712       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0906 20:25:16.325817       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-cl7rkd" (16.101µs)
I0906 20:25:18.311492       1 replica_set.go:443] Pod coredns-78fcd69978-mkh47 updated, objectMeta {Name:coredns-78fcd69978-mkh47 GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:3c99bdc4-06fc-49b2-8b28-5a022902141b ResourceVersion:618 Generation:0 CreationTimestamp:2022-09-06 20:24:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:6f9c9a4a-53f8-4212-8d85-c046c7771a4f Controller:0xc000c5b5bf BlockOwnerDeletion:0xc000c5b5e0}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:24:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f9c9a4a-53f8-4212-8d85-c046c7771a4f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-06 20:24:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 20:25:05 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:coredns-78fcd69978-mkh47 GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:3c99bdc4-06fc-49b2-8b28-5a022902141b ResourceVersion:666 Generation:0 CreationTimestamp:2022-09-06 20:24:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[cni.projectcalico.org/containerID:0638203d9ded33194d9ce7adafed6ef6e0527e696425b0051bcc1302f4ab509c cni.projectcalico.org/podIP:192.168.201.65/32 cni.projectcalico.org/podIPs:192.168.201.65/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:6f9c9a4a-53f8-4212-8d85-c046c7771a4f Controller:0xc002521e37 BlockOwnerDeletion:0xc002521e38}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:24:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f9c9a4a-53f8-4212-8d85-c046c7771a4f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-06 20:24:45 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 20:25:05 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-06 20:25:18 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0906 20:25:18.311693       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0be09036a3ce230, ext:14150600332, loc:(*time.Location)(0x751a1a0)}}
I0906 20:25:18.311856       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (170.005µs)
I0906 20:25:18.311884       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-mkh47"
I0906 20:25:18.311907       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-mkh47, PodDisruptionBudget controller will avoid syncing.
... skipping 133 lines ...
I0906 20:25:44.813477       1 gc_controller.go:161] GC'ing orphaned
I0906 20:25:44.813506       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:25:44.893124       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:25:45.011003       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-control-plane-jfxgf"
E0906 20:25:45.045180       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0906 20:25:45.045271       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0906 20:25:45.759793       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0906 20:25:49.816227       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-control-plane-jfxgf ReadyCondition updated. Updating timestamp.
I0906 20:25:53.394184       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39944" resp=200
I0906 20:25:55.732762       1 endpoints_controller.go:557] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
I0906 20:25:55.733216       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-qcxs2"
I0906 20:25:55.733273       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-qcxs2, PodDisruptionBudget controller will avoid syncing.
I0906 20:25:55.733280       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-qcxs2"
... skipping 95 lines ...
I0906 20:26:16.327350       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-lcwcec-md-0-mcztr}
I0906 20:26:16.327373       1 taint_manager.go:440] "Updating known taints on node" node="capz-lcwcec-md-0-mcztr" taints=[]
I0906 20:26:16.331475       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be0904d3721726, ext:19768211330, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:16.331840       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be091a13c756ec, ext:104773798116, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:16.331890       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-lcwcec-md-0-mcztr], creating 1
I0906 20:26:16.332872       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
W0906 20:26:16.332897       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-lcwcec-md-0-mcztr" does not exist
I0906 20:26:16.336749       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be090a8341654f, ext:42496584519, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:16.337074       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be091a14173fa9, ext:104779035141, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:16.337122       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-lcwcec-md-0-mcztr], creating 1
I0906 20:26:16.349230       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
I0906 20:26:16.358516       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
I0906 20:26:16.358918       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-lcwcec-md-0-mcztr" new_ttl="0s"
... skipping 118 lines ...
I0906 20:26:18.945138       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-lcwcec-md-0-jzv54}
I0906 20:26:18.945206       1 taint_manager.go:440] "Updating known taints on node" node="capz-lcwcec-md-0-jzv54" taints=[]
I0906 20:26:18.945939       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be091a1931b1d7, ext:104864654287, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:18.946048       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be091ab8637b32, ext:107388010894, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:18.946253       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-lcwcec-md-0-jzv54], creating 1
I0906 20:26:18.946672       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
W0906 20:26:18.948638       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-lcwcec-md-0-jzv54" does not exist
I0906 20:26:18.952038       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be091aa3beead9, ext:107041681617, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:18.952300       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be091ab8c2ddc0, ext:107394261944, loc:(*time.Location)(0x751a1a0)}}
I0906 20:26:18.952440       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-lcwcec-md-0-jzv54], creating 1
I0906 20:26:18.966666       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-lcwcec-md-0-jzv54" new_ttl="0s"
I0906 20:26:18.966984       1 controller_utils.go:581] Controller kube-proxy created pod kube-proxy-9ndgr
I0906 20:26:18.967083       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
... skipping 448 lines ...
I0906 20:26:49.063249       1 controller.go:804] Finished updateLoadBalancerHosts
I0906 20:26:49.063255       1 controller.go:760] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0906 20:26:49.063262       1 controller.go:731] It took 9.4602e-05 seconds to finish nodeSyncInternal
I0906 20:26:49.062982       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:26:49.089552       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:26:49.091669       1 controller_utils.go:221] Made sure that Node capz-lcwcec-md-0-jzv54 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0906 20:26:49.825008       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-lcwcec-md-0-mcztr transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:26:26 +0000 UTC,LastTransitionTime:2022-09-06 20:26:16 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:26:46 +0000 UTC,LastTransitionTime:2022-09-06 20:26:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:26:49.825100       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-mcztr ReadyCondition updated. Updating timestamp.
I0906 20:26:49.836520       1 node_lifecycle_controller.go:893] Node capz-lcwcec-md-0-mcztr is healthy again, removing all taints
I0906 20:26:49.836611       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-lcwcec-md-0-jzv54 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:26:29 +0000 UTC,LastTransitionTime:2022-09-06 20:26:18 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:26:49 +0000 UTC,LastTransitionTime:2022-09-06 20:26:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:26:49.836708       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-jzv54 ReadyCondition updated. Updating timestamp.
I0906 20:26:49.839297       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-lcwcec-md-0-mcztr}
I0906 20:26:49.841308       1 taint_manager.go:440] "Updating known taints on node" node="capz-lcwcec-md-0-mcztr" taints=[]
I0906 20:26:49.841237       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
I0906 20:26:49.841568       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-lcwcec-md-0-mcztr"
I0906 20:26:49.858718       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-lcwcec-md-0-jzv54}
... skipping 311 lines ...
I0906 20:28:54.305056       1 pv_controller.go:1108] reclaimVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: policy is Delete
I0906 20:28:54.305069       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]]
I0906 20:28:54.305075       1 pv_controller.go:1763] operation "delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]" is already running, skipping
I0906 20:28:54.305090       1 pv_protection_controller.go:205] Got event on PV pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10
I0906 20:28:54.307075       1 pv_controller.go:1340] isVolumeReleased[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is released
I0906 20:28:54.307256       1 pv_controller.go:1404] doDeleteVolume [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]
I0906 20:28:54.352048       1 pv_controller.go:1259] deletion of volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:28:54.352075       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: set phase Failed
I0906 20:28:54.352087       1 pv_controller.go:858] updating PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: set phase Failed
I0906 20:28:54.355782       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" with version 1199
I0906 20:28:54.356370       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: phase: Failed, bound to: "azuredisk-8081/pvc-cxbbh (uid: 6e1ff2b0-1699-4e2e-9c2d-d1067f556d10)", boundByController: true
I0906 20:28:54.356667       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is bound to claim azuredisk-8081/pvc-cxbbh
I0906 20:28:54.356834       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: claim azuredisk-8081/pvc-cxbbh not found
I0906 20:28:54.357009       1 pv_controller.go:1108] reclaimVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: policy is Delete
I0906 20:28:54.355873       1 pv_protection_controller.go:205] Got event on PV pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10
I0906 20:28:54.356893       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" with version 1199
I0906 20:28:54.357454       1 pv_controller.go:879] volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" entered phase "Failed"
I0906 20:28:54.357620       1 pv_controller.go:901] volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
E0906 20:28:54.357918       1 goroutinemap.go:150] Operation for "delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]" failed. No retries permitted until 2022-09-06 20:28:54.857835266 +0000 UTC m=+263.299802462 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:28:54.357188       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]]
I0906 20:28:54.358177       1 pv_controller.go:1765] operation "delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]" postponed due to exponential backoff
I0906 20:28:54.358005       1 event.go:291] "Event occurred" object="pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted"
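The VolumeFailedDelete sequence above is expected ordering rather than a real failure: the PV's reclaim policy is Delete, but the Azure managed disk cannot be deleted while it is still attached, so the PV controller marks the volume Failed and schedules retries with growing backoff (500ms, then 1s, ...) while the attach/detach controller detaches the disk; a later retry then deletes the disk and the PV. A minimal sketch of that retry shape, assuming hypothetical deleteDisk and errStillAttached stand-ins (the real controller drives this through its goroutinemap with per-operation backoff):

    package example

    import (
        "errors"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // errStillAttached stands in for the provisioner error seen above
    // ("already attached to node ..., could not be deleted").
    var errStillAttached = errors.New("disk still attached to a node")

    // deleteDiskWithBackoff retries a delete callback with exponential backoff,
    // matching the 500ms -> 1s -> ... cadence the PV controller logs.
    func deleteDiskWithBackoff(deleteDisk func() error) error {
        backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 6}
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            err := deleteDisk()
            if errors.Is(err, errStillAttached) {
                return false, nil // not done yet; the disk must be detached first
            }
            return err == nil, err
        })
    }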
I0906 20:28:59.177331       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:28:59.177512       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 to the node "capz-lcwcec-md-0-jzv54" mounted false
I0906 20:28:59.235453       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-lcwcec-md-0-jzv54" succeeded. VolumesAttached: []
... skipping 5 lines ...
I0906 20:28:59.274083       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10"
I0906 20:28:59.274097       1 azure_controller_standard.go:166] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10)
I0906 20:28:59.790265       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:28:59.881227       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-jzv54 ReadyCondition updated. Updating timestamp.
I0906 20:28:59.899613       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:28:59.899685       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" with version 1199
I0906 20:28:59.899726       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: phase: Failed, bound to: "azuredisk-8081/pvc-cxbbh (uid: 6e1ff2b0-1699-4e2e-9c2d-d1067f556d10)", boundByController: true
I0906 20:28:59.899800       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is bound to claim azuredisk-8081/pvc-cxbbh
I0906 20:28:59.899879       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: claim azuredisk-8081/pvc-cxbbh not found
I0906 20:28:59.899895       1 pv_controller.go:1108] reclaimVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: policy is Delete
I0906 20:28:59.899925       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]]
I0906 20:28:59.899991       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10] started
I0906 20:28:59.903418       1 pv_controller.go:1340] isVolumeReleased[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is released
I0906 20:28:59.903438       1 pv_controller.go:1404] doDeleteVolume [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]
I0906 20:28:59.903474       1 pv_controller.go:1259] deletion of volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) since it's in attaching or detaching state
I0906 20:28:59.903489       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: set phase Failed
I0906 20:28:59.903499       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: phase Failed already set
E0906 20:28:59.903525       1 goroutinemap.go:150] Operation for "delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]" failed. No retries permitted until 2022-09-06 20:29:00.903508401 +0000 UTC m=+269.345475497 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) since it's in attaching or detaching state
I0906 20:29:03.387003       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="112.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59572" resp=200
I0906 20:29:04.820034       1 gc_controller.go:161] GC'ing orphaned
I0906 20:29:04.820206       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:29:09.841819       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) returned with <nil>
I0906 20:29:09.841887       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10) succeeded
I0906 20:29:09.841900       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10 was detached from node:capz-lcwcec-md-0-jzv54
I0906 20:29:09.841952       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10") on node "capz-lcwcec-md-0-jzv54" 
I0906 20:29:13.385377       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="73.602µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46544" resp=200
I0906 20:29:14.732929       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:29:14.790441       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:29:14.900667       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:29:14.900734       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" with version 1199
I0906 20:29:14.900775       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: phase: Failed, bound to: "azuredisk-8081/pvc-cxbbh (uid: 6e1ff2b0-1699-4e2e-9c2d-d1067f556d10)", boundByController: true
I0906 20:29:14.900812       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is bound to claim azuredisk-8081/pvc-cxbbh
I0906 20:29:14.900828       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: claim azuredisk-8081/pvc-cxbbh not found
I0906 20:29:14.900836       1 pv_controller.go:1108] reclaimVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: policy is Delete
I0906 20:29:14.900852       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]]
I0906 20:29:14.900893       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10] started
I0906 20:29:14.916001       1 pv_controller.go:1340] isVolumeReleased[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is released
... skipping 2 lines ...
I0906 20:29:19.205273       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:29:19.884331       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-jzv54 ReadyCondition updated. Updating timestamp.
I0906 20:29:20.294088       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10
I0906 20:29:20.294141       1 pv_controller.go:1435] volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" deleted
I0906 20:29:20.294156       1 pv_controller.go:1283] deleteVolumeOperation [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: success
I0906 20:29:20.299900       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10" with version 1241
I0906 20:29:20.300174       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: phase: Failed, bound to: "azuredisk-8081/pvc-cxbbh (uid: 6e1ff2b0-1699-4e2e-9c2d-d1067f556d10)", boundByController: true
I0906 20:29:20.300303       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: volume is bound to claim azuredisk-8081/pvc-cxbbh
I0906 20:29:20.300330       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: claim azuredisk-8081/pvc-cxbbh not found
I0906 20:29:20.300385       1 pv_controller.go:1108] reclaimVolume[pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10]: policy is Delete
I0906 20:29:20.300402       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10[c257eb8d-025b-43ce-acf2-c599ae1bb882]]
I0906 20:29:20.300469       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10] started
I0906 20:29:20.300668       1 pv_protection_controller.go:205] Got event on PV pvc-6e1ff2b0-1699-4e2e-9c2d-d1067f556d10
... skipping 156 lines ...
I0906 20:29:29.920793       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" to node "capz-lcwcec-md-0-mcztr".
I0906 20:29:29.938476       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (235.389939ms)
I0906 20:29:29.974103       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" lun 0 to node "capz-lcwcec-md-0-mcztr".
I0906 20:29:29.974435       1 azure_controller_standard.go:93] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - attach disk(capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) with DiskEncryptionSetID()
I0906 20:29:30.411875       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0906 20:29:30.459242       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-pppv2, uid 866d2b38-4330-4270-834f-5059985af88b, event type delete
E0906 20:29:30.480976       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-7sdv7" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0906 20:29:30.504051       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid ef1dd1f2-b577-4d22-b6ba-0990a419733e, event type delete
I0906 20:29:30.506450       1 publisher.go:186] Finished syncing namespace "azuredisk-2540" (2.491157ms)
I0906 20:29:30.531531       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.4µs)
I0906 20:29:30.531571       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid 97583be9-7382-45b4-8590-0b90bbfacdaa, event type delete
I0906 20:29:30.531669       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0906 20:29:30.545481       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.9µs)
I0906 20:29:30.545760       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0906 20:29:30.558067       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (150.247672ms)
I0906 20:29:31.157660       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0906 20:29:31.186876       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid 033ff53f-0aa9-4b3b-bf5d-737c02725cdb, event type delete
I0906 20:29:31.189553       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (2.777264ms)
I0906 20:29:31.237085       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-9rr4x, uid 2557414b-feb9-4595-8b6a-fe03daebb50f, event type delete
E0906 20:29:31.247847       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-gvjq2" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0906 20:29:31.248520       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0906 20:29:31.248567       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.7µs)
I0906 20:29:31.249471       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid cda470d8-fd33-4046-bce8-dcb403188c59, event type delete
I0906 20:29:31.288503       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.8µs)
I0906 20:29:31.288809       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4728, estimate: 0, errors: <nil>
I0906 20:29:31.297478       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4728" (143.577019ms)
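The "error synchronizing serviceaccount ... is forbidden ... because it is being terminated" lines are benign: once a test namespace enters Terminating, new Secrets may no longer be created in it, so the token controller's refill attempt fails until the namespace controller finishes deleteAllContent. A minimal sketch, assuming a client-go clientset and a hypothetical deleteNamespaceAndWait helper, of deleting a test namespace and waiting for termination to complete:

    package example

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceAndWait removes a namespace and polls until it is fully gone,
    // i.e. after the namespace controller has deleted all contained resources.
    func deleteNamespaceAndWait(clientset kubernetes.Interface, ns string, timeout time.Duration) error {
        if err := clientset.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
            return err
        }
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            _, err := clientset.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // namespace fully terminated
            }
            return false, nil // still terminating; "being terminated" errors are expected meanwhile
        })
    }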
... skipping 356 lines ...
I0906 20:31:34.136451       1 pv_controller.go:1108] reclaimVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: policy is Delete
I0906 20:31:34.136480       1 pv_controller.go:1752] scheduleOperation[delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]]
I0906 20:31:34.136488       1 pv_controller.go:1763] operation "delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]" is already running, skipping
I0906 20:31:34.136514       1 pv_controller.go:1231] deleteVolumeOperation [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084] started
I0906 20:31:34.143890       1 pv_controller.go:1340] isVolumeReleased[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is released
I0906 20:31:34.143908       1 pv_controller.go:1404] doDeleteVolume [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]
I0906 20:31:34.198829       1 pv_controller.go:1259] deletion of volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:31:34.198857       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: set phase Failed
I0906 20:31:34.198867       1 pv_controller.go:858] updating PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: set phase Failed
I0906 20:31:34.202597       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" with version 1505
I0906 20:31:34.202655       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: phase: Failed, bound to: "azuredisk-5466/pvc-mc9qt (uid: 922e69ff-8c3a-4e50-a09f-bb6ba8d61084)", boundByController: true
I0906 20:31:34.202705       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is bound to claim azuredisk-5466/pvc-mc9qt
I0906 20:31:34.202726       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: claim azuredisk-5466/pvc-mc9qt not found
I0906 20:31:34.202735       1 pv_controller.go:1108] reclaimVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: policy is Delete
I0906 20:31:34.202750       1 pv_controller.go:1752] scheduleOperation[delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]]
I0906 20:31:34.202757       1 pv_controller.go:1763] operation "delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]" is already running, skipping
I0906 20:31:34.202790       1 pv_protection_controller.go:205] Got event on PV pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084
I0906 20:31:34.203223       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" with version 1505
I0906 20:31:34.203246       1 pv_controller.go:879] volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" entered phase "Failed"
I0906 20:31:34.203256       1 pv_controller.go:901] volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
E0906 20:31:34.203296       1 goroutinemap.go:150] Operation for "delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]" failed. No retries permitted until 2022-09-06 20:31:34.70327723 +0000 UTC m=+423.145244426 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:31:34.203566       1 event.go:291] "Event occurred" object="pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted"
I0906 20:31:36.599636       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
I0906 20:31:36.599671       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 to the node "capz-lcwcec-md-0-mcztr" mounted false
I0906 20:31:36.694543       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-lcwcec-md-0-mcztr" succeeded. VolumesAttached: []
I0906 20:31:36.694696       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084") on node "capz-lcwcec-md-0-mcztr" 
I0906 20:31:36.694960       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
... skipping 11 lines ...
I0906 20:31:44.735616       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:31:44.798419       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:31:44.824598       1 gc_controller.go:161] GC'ing orphaned
I0906 20:31:44.824634       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:31:44.906207       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:31:44.906426       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" with version 1505
I0906 20:31:44.906484       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: phase: Failed, bound to: "azuredisk-5466/pvc-mc9qt (uid: 922e69ff-8c3a-4e50-a09f-bb6ba8d61084)", boundByController: true
I0906 20:31:44.906519       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is bound to claim azuredisk-5466/pvc-mc9qt
I0906 20:31:44.906558       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: claim azuredisk-5466/pvc-mc9qt not found
I0906 20:31:44.906590       1 pv_controller.go:1108] reclaimVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: policy is Delete
I0906 20:31:44.906649       1 pv_controller.go:1752] scheduleOperation[delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]]
I0906 20:31:44.906722       1 pv_controller.go:1231] deleteVolumeOperation [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084] started
I0906 20:31:44.913679       1 pv_controller.go:1340] isVolumeReleased[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is released
I0906 20:31:44.913697       1 pv_controller.go:1404] doDeleteVolume [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]
I0906 20:31:44.913729       1 pv_controller.go:1259] deletion of volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) since it's in attaching or detaching state
I0906 20:31:44.913879       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: set phase Failed
I0906 20:31:44.913969       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: phase Failed already set
E0906 20:31:44.914071       1 goroutinemap.go:150] Operation for "delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]" failed. No retries permitted until 2022-09-06 20:31:45.914047 +0000 UTC m=+434.356014196 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) since it's in attaching or detaching state
I0906 20:31:45.322269       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:31:46.895074       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 21 items received
I0906 20:31:49.730507       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 22 items received
I0906 20:31:50.728842       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 14 items received
I0906 20:31:52.260662       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) returned with <nil>
I0906 20:31:52.260705       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084) succeeded
I0906 20:31:52.260716       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084 was detached from node:capz-lcwcec-md-0-mcztr
I0906 20:31:52.260744       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084") on node "capz-lcwcec-md-0-mcztr" 
I0906 20:31:53.385562       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.502µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47136" resp=200
I0906 20:31:59.799442       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:31:59.906903       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:31:59.907066       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" with version 1505
I0906 20:31:59.907178       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: phase: Failed, bound to: "azuredisk-5466/pvc-mc9qt (uid: 922e69ff-8c3a-4e50-a09f-bb6ba8d61084)", boundByController: true
I0906 20:31:59.907279       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is bound to claim azuredisk-5466/pvc-mc9qt
I0906 20:31:59.907360       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: claim azuredisk-5466/pvc-mc9qt not found
I0906 20:31:59.907434       1 pv_controller.go:1108] reclaimVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: policy is Delete
I0906 20:31:59.907543       1 pv_controller.go:1752] scheduleOperation[delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]]
I0906 20:31:59.907642       1 pv_controller.go:1231] deleteVolumeOperation [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084] started
I0906 20:31:59.913017       1 pv_controller.go:1340] isVolumeReleased[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is released
... skipping 2 lines ...
I0906 20:32:04.825557       1 gc_controller.go:161] GC'ing orphaned
I0906 20:32:04.825597       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:32:05.085092       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084
I0906 20:32:05.085125       1 pv_controller.go:1435] volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" deleted
I0906 20:32:05.085134       1 pv_controller.go:1283] deleteVolumeOperation [pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: success
I0906 20:32:05.096722       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084" with version 1553
I0906 20:32:05.096763       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: phase: Failed, bound to: "azuredisk-5466/pvc-mc9qt (uid: 922e69ff-8c3a-4e50-a09f-bb6ba8d61084)", boundByController: true
I0906 20:32:05.096790       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: volume is bound to claim azuredisk-5466/pvc-mc9qt
I0906 20:32:05.096809       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: claim azuredisk-5466/pvc-mc9qt not found
I0906 20:32:05.096821       1 pv_controller.go:1108] reclaimVolume[pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084]: policy is Delete
I0906 20:32:05.096833       1 pv_controller.go:1752] scheduleOperation[delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]]
I0906 20:32:05.096837       1 pv_controller.go:1763] operation "delete-pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084[89de9314-df8f-432a-9b6a-2b067e8b5dbf]" is already running, skipping
I0906 20:32:05.096847       1 pv_protection_controller.go:205] Got event on PV pvc-922e69ff-8c3a-4e50-a09f-bb6ba8d61084
... skipping 121 lines ...
I0906 20:32:14.494918       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-lcq44.17125f820166bbfd, uid 30a45543-322a-43ab-8a69-748ed81dbdf8, event type delete
I0906 20:32:14.504808       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-lcq44.17125f89dca1b805, uid 1df9130e-d672-4cc5-946a-34067228eecc, event type delete
I0906 20:32:14.510194       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-lcq44.17125f91f1337f89, uid aa186c01-f66a-46c3-9e66-32bdde2e68b3, event type delete
I0906 20:32:14.514365       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name pvc-mc9qt.17125f7495edd6aa, uid d2110975-4688-43a1-abc1-103e1a92dbfa, event type delete
I0906 20:32:14.522816       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name pvc-mc9qt.17125f7526e483da, uid af874949-a113-4f22-9fec-baae870018ae, event type delete
I0906 20:32:14.566585       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-r2f8d, uid e3d894d5-1c96-4fe5-b7b6-919dec47bcad, event type delete
E0906 20:32:14.579952       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-p7lmq" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0906 20:32:14.585077       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5466/default), service account deleted, removing tokens
I0906 20:32:14.585987       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5466, name default, uid 829de089-7571-4512-b905-108120505339, event type delete
I0906 20:32:14.586244       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (4µs)
I0906 20:32:14.609432       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (2.2µs)
I0906 20:32:14.610337       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5466, estimate: 0, errors: <nil>
I0906 20:32:14.620151       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5466" (233.721064ms)
... skipping 153 lines ...
I0906 20:32:30.707957       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: claim azuredisk-2790/pvc-kxn55 not found
I0906 20:32:30.707999       1 pv_controller.go:1108] reclaimVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: policy is Delete
I0906 20:32:30.708020       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]]
I0906 20:32:30.708028       1 pv_controller.go:1763] operation "delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]" is already running, skipping
I0906 20:32:30.721175       1 pv_controller.go:1340] isVolumeReleased[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: volume is released
I0906 20:32:30.721340       1 pv_controller.go:1404] doDeleteVolume [pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]
I0906 20:32:30.721503       1 pv_controller.go:1259] deletion of volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) since it's in attaching or detaching state
I0906 20:32:30.721611       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: set phase Failed
I0906 20:32:30.721755       1 pv_controller.go:858] updating PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: set phase Failed
I0906 20:32:30.724725       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" with version 1655
I0906 20:32:30.725045       1 pv_controller.go:879] volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" entered phase "Failed"
I0906 20:32:30.725065       1 pv_controller.go:901] volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) since it's in attaching or detaching state
E0906 20:32:30.725195       1 goroutinemap.go:150] Operation for "delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]" failed. No retries permitted until 2022-09-06 20:32:31.225122208 +0000 UTC m=+479.667089404 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) since it's in attaching or detaching state
I0906 20:32:30.725456       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" with version 1655
I0906 20:32:30.725640       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: phase: Failed, bound to: "azuredisk-2790/pvc-kxn55 (uid: 41ec41b2-eb86-4bc5-846e-c3ae903be467)", boundByController: true
I0906 20:32:30.725786       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: volume is bound to claim azuredisk-2790/pvc-kxn55
I0906 20:32:30.725996       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: claim azuredisk-2790/pvc-kxn55 not found
I0906 20:32:30.726130       1 pv_controller.go:1108] reclaimVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: policy is Delete
I0906 20:32:30.726225       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]]
I0906 20:32:30.726363       1 pv_controller.go:1765] operation "delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]" postponed due to exponential backoff
I0906 20:32:30.725948       1 pv_protection_controller.go:205] Got event on PV pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467
I0906 20:32:30.725974       1 event.go:291] "Event occurred" object="pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) since it's in attaching or detaching state"
I0906 20:32:31.803338       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 17 items received
I0906 20:32:33.385655       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.602µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46266" resp=200
I0906 20:32:39.862916       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) returned with <nil>
I0906 20:32:39.862959       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467) succeeded
I0906 20:32:39.862967       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467 was detached from node:capz-lcwcec-md-0-jzv54
I0906 20:32:39.862993       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467") on node "capz-lcwcec-md-0-jzv54" 
... skipping 3 lines ...
I0906 20:32:44.736733       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:32:44.801957       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:32:44.826587       1 gc_controller.go:161] GC'ing orphaned
I0906 20:32:44.826615       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:32:44.909001       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:32:44.909087       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" with version 1655
I0906 20:32:44.909148       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: phase: Failed, bound to: "azuredisk-2790/pvc-kxn55 (uid: 41ec41b2-eb86-4bc5-846e-c3ae903be467)", boundByController: true
I0906 20:32:44.909183       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: volume is bound to claim azuredisk-2790/pvc-kxn55
I0906 20:32:44.909206       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: claim azuredisk-2790/pvc-kxn55 not found
I0906 20:32:44.909217       1 pv_controller.go:1108] reclaimVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: policy is Delete
I0906 20:32:44.909233       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]]
I0906 20:32:44.909277       1 pv_controller.go:1231] deleteVolumeOperation [pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467] started
I0906 20:32:44.914450       1 pv_controller.go:1340] isVolumeReleased[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: volume is released
I0906 20:32:44.914471       1 pv_controller.go:1404] doDeleteVolume [pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]
I0906 20:32:45.358475       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:32:50.091199       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467
I0906 20:32:50.091236       1 pv_controller.go:1435] volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" deleted
I0906 20:32:50.091250       1 pv_controller.go:1283] deleteVolumeOperation [pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: success
I0906 20:32:50.100067       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467" with version 1683
I0906 20:32:50.100116       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: phase: Failed, bound to: "azuredisk-2790/pvc-kxn55 (uid: 41ec41b2-eb86-4bc5-846e-c3ae903be467)", boundByController: true
I0906 20:32:50.100147       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: volume is bound to claim azuredisk-2790/pvc-kxn55
I0906 20:32:50.100167       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: claim azuredisk-2790/pvc-kxn55 not found
I0906 20:32:50.100177       1 pv_controller.go:1108] reclaimVolume[pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467]: policy is Delete
I0906 20:32:50.100509       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]]
I0906 20:32:50.100573       1 pv_controller.go:1763] operation "delete-pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467[c5087324-3e87-414a-9594-be671c2d1571]" is already running, skipping
I0906 20:32:50.100619       1 pv_protection_controller.go:205] Got event on PV pvc-41ec41b2-eb86-4bc5-846e-c3ae903be467
... skipping 252 lines ...
I0906 20:33:14.431050       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:14.431246       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:14.431370       1 pv_controller.go:1763] operation "delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]" is already running, skipping
I0906 20:33:14.430507       1 pv_protection_controller.go:205] Got event on PV pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256
I0906 20:33:14.433992       1 pv_controller.go:1340] isVolumeReleased[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is released
I0906 20:33:14.434010       1 pv_controller.go:1404] doDeleteVolume [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]
I0906 20:33:14.455455       1 pv_controller.go:1259] deletion of volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:33:14.455636       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: set phase Failed
I0906 20:33:14.455683       1 pv_controller.go:858] updating PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: set phase Failed
I0906 20:33:14.458535       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1773
I0906 20:33:14.458982       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase: Failed, bound to: "azuredisk-5356/pvc-zmh7b (uid: a7f61aee-f317-4b31-aa15-fa5ac1c29256)", boundByController: true
I0906 20:33:14.459232       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is bound to claim azuredisk-5356/pvc-zmh7b
I0906 20:33:14.459434       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: claim azuredisk-5356/pvc-zmh7b not found
I0906 20:33:14.459610       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:14.459747       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:14.459851       1 pv_controller.go:1763] operation "delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]" is already running, skipping
I0906 20:33:14.458827       1 pv_protection_controller.go:205] Got event on PV pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256
I0906 20:33:14.458935       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1773
I0906 20:33:14.460121       1 pv_controller.go:879] volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" entered phase "Failed"
I0906 20:33:14.460137       1 pv_controller.go:901] volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
E0906 20:33:14.460185       1 goroutinemap.go:150] Operation for "delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]" failed. No retries permitted until 2022-09-06 20:33:14.960167027 +0000 UTC m=+523.402134123 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:33:14.460323       1 event.go:291] "Event occurred" object="pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted"
I0906 20:33:14.737218       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:33:14.803487       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:33:14.910348       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:33:14.910430       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1773
I0906 20:33:14.910517       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase: Failed, bound to: "azuredisk-5356/pvc-zmh7b (uid: a7f61aee-f317-4b31-aa15-fa5ac1c29256)", boundByController: true
I0906 20:33:14.910619       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is bound to claim azuredisk-5356/pvc-zmh7b
I0906 20:33:14.910694       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: claim azuredisk-5356/pvc-zmh7b not found
I0906 20:33:14.910709       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:14.910819       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:14.910895       1 pv_controller.go:1765] operation "delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]" postponed due to exponential backoff
I0906 20:33:15.374216       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
... skipping 15 lines ...
I0906 20:33:23.729562       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0906 20:33:24.828100       1 gc_controller.go:161] GC'ing orphaned
I0906 20:33:24.828134       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:33:29.804645       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:33:29.911025       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:33:29.911095       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1773
I0906 20:33:29.911135       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase: Failed, bound to: "azuredisk-5356/pvc-zmh7b (uid: a7f61aee-f317-4b31-aa15-fa5ac1c29256)", boundByController: true
I0906 20:33:29.911179       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is bound to claim azuredisk-5356/pvc-zmh7b
I0906 20:33:29.911208       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: claim azuredisk-5356/pvc-zmh7b not found
I0906 20:33:29.911221       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:29.911240       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:29.911273       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256] started
I0906 20:33:29.916817       1 pv_controller.go:1340] isVolumeReleased[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is released
I0906 20:33:29.916842       1 pv_controller.go:1404] doDeleteVolume [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]
I0906 20:33:29.916915       1 pv_controller.go:1259] deletion of volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) since it's in attaching or detaching state
I0906 20:33:29.916991       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: set phase Failed
I0906 20:33:29.917009       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase Failed already set
E0906 20:33:29.917076       1 goroutinemap.go:150] Operation for "delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]" failed. No retries permitted until 2022-09-06 20:33:30.917030508 +0000 UTC m=+539.358997604 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) since it's in attaching or detaching state
I0906 20:33:32.168791       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) returned with <nil>
I0906 20:33:32.168830       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256) succeeded
I0906 20:33:32.168841       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256 was detached from node:capz-lcwcec-md-0-mcztr
I0906 20:33:32.168890       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256") on node "capz-lcwcec-md-0-mcztr" 
I0906 20:33:32.804537       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 0 items received
I0906 20:33:33.385831       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="63.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55364" resp=200
I0906 20:33:43.385445       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.102µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55380" resp=200
I0906 20:33:44.737936       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:33:44.805766       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:33:44.828940       1 gc_controller.go:161] GC'ing orphaned
I0906 20:33:44.828976       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:33:44.911532       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:33:44.911705       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1773
I0906 20:33:44.911788       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase: Failed, bound to: "azuredisk-5356/pvc-zmh7b (uid: a7f61aee-f317-4b31-aa15-fa5ac1c29256)", boundByController: true
I0906 20:33:44.911862       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is bound to claim azuredisk-5356/pvc-zmh7b
I0906 20:33:44.911923       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: claim azuredisk-5356/pvc-zmh7b not found
I0906 20:33:44.911938       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:44.911969       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:44.912006       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256] started
I0906 20:33:44.920758       1 pv_controller.go:1340] isVolumeReleased[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is released
... skipping 2 lines ...
I0906 20:33:45.735856       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 0 items received
I0906 20:33:45.822375       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 11 items received
I0906 20:33:50.115522       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256
I0906 20:33:50.115555       1 pv_controller.go:1435] volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" deleted
I0906 20:33:50.115568       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: success
I0906 20:33:50.124419       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256" with version 1829
I0906 20:33:50.124460       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: phase: Failed, bound to: "azuredisk-5356/pvc-zmh7b (uid: a7f61aee-f317-4b31-aa15-fa5ac1c29256)", boundByController: true
I0906 20:33:50.124498       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: volume is bound to claim azuredisk-5356/pvc-zmh7b
I0906 20:33:50.124517       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: claim azuredisk-5356/pvc-zmh7b not found
I0906 20:33:50.124531       1 pv_controller.go:1108] reclaimVolume[pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256]: policy is Delete
I0906 20:33:50.124546       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256[42a92adf-6450-43f1-b443-dfa29f15ec18]]
I0906 20:33:50.124572       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256] started
I0906 20:33:50.124835       1 pv_protection_controller.go:205] Got event on PV pvc-a7f61aee-f317-4b31-aa15-fa5ac1c29256
... skipping 134 lines ...
I0906 20:33:59.912732       1 pv_controller.go:1038] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" bound to claim "azuredisk-5194/pvc-45vbn"
I0906 20:33:59.912749       1 pv_controller.go:1039] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:33:59.912772       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-45vbn" status after binding: phase: Bound, bound to: "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69", bindCompleted: true, boundByController: true
I0906 20:33:59.920074       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0906 20:33:59.927484       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-jzv54 ReadyCondition updated. Updating timestamp.
I0906 20:33:59.937100       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-ndm58, uid 39323f6d-f2a6-4399-abdb-64ef934a66f8, event type delete
E0906 20:33:59.948334       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-52m6w" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0906 20:33:59.957138       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0906 20:33:59.957330       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (2.8µs)
I0906 20:33:59.957393       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid cf10e1b8-a338-40ca-915c-7cb64e68f346, event type delete
I0906 20:33:59.968578       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid 61dbe540-1c89-485c-a067-1d3a913b709a, event type delete
I0906 20:33:59.971002       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (2.622258ms)
I0906 20:34:00.033781       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-q7xr4.17125fa500d7a7af, uid 1c42a086-4564-481d-a78d-08338e91d469, event type delete
... skipping 682 lines ...
I0906 20:35:21.579772       1 pv_controller.go:1108] reclaimVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: policy is Delete
I0906 20:35:21.579802       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]]
I0906 20:35:21.579828       1 pv_controller.go:1763] operation "delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]" is already running, skipping
I0906 20:35:21.579896       1 pv_controller.go:1231] deleteVolumeOperation [pvc-2fdef245-5695-4274-98cb-008e1afa81f9] started
I0906 20:35:21.581461       1 pv_controller.go:1340] isVolumeReleased[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is released
I0906 20:35:21.581482       1 pv_controller.go:1404] doDeleteVolume [pvc-2fdef245-5695-4274-98cb-008e1afa81f9]
I0906 20:35:21.604092       1 pv_controller.go:1259] deletion of volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:35:21.604114       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: set phase Failed
I0906 20:35:21.604122       1 pv_controller.go:858] updating PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: set phase Failed
I0906 20:35:21.607701       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" with version 2063
I0906 20:35:21.607943       1 pv_controller.go:879] volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" entered phase "Failed"
I0906 20:35:21.607960       1 pv_controller.go:901] volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
E0906 20:35:21.608029       1 goroutinemap.go:150] Operation for "delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]" failed. No retries permitted until 2022-09-06 20:35:22.107982773 +0000 UTC m=+650.549949969 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:35:21.607760       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" with version 2063
I0906 20:35:21.608235       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: phase: Failed, bound to: "azuredisk-5194/pvc-5c87b (uid: 2fdef245-5695-4274-98cb-008e1afa81f9)", boundByController: true
I0906 20:35:21.608387       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is bound to claim azuredisk-5194/pvc-5c87b
I0906 20:35:21.608515       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: claim azuredisk-5194/pvc-5c87b not found
I0906 20:35:21.608618       1 pv_controller.go:1108] reclaimVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: policy is Delete
I0906 20:35:21.608672       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]]
I0906 20:35:21.608706       1 pv_controller.go:1765] operation "delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]" postponed due to exponential backoff
I0906 20:35:21.607773       1 pv_protection_controller.go:205] Got event on PV pvc-2fdef245-5695-4274-98cb-008e1afa81f9
... skipping 30 lines ...
I0906 20:35:29.916254       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:35:29.916279       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc found: phase: Bound, bound to: "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2", bindCompleted: true, boundByController: true
I0906 20:35:29.916336       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: all is bound
I0906 20:35:29.916349       1 pv_controller.go:858] updating PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: set phase Bound
I0906 20:35:29.916358       1 pv_controller.go:861] updating PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase Bound already set
I0906 20:35:29.916398       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" with version 2063
I0906 20:35:29.916437       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: phase: Failed, bound to: "azuredisk-5194/pvc-5c87b (uid: 2fdef245-5695-4274-98cb-008e1afa81f9)", boundByController: true
I0906 20:35:29.916489       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is bound to claim azuredisk-5194/pvc-5c87b
I0906 20:35:29.916526       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: claim azuredisk-5194/pvc-5c87b not found
I0906 20:35:29.916535       1 pv_controller.go:1108] reclaimVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: policy is Delete
I0906 20:35:29.916578       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]]
I0906 20:35:29.916623       1 pv_controller.go:1231] deleteVolumeOperation [pvc-2fdef245-5695-4274-98cb-008e1afa81f9] started
I0906 20:35:29.915895       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-45vbn]: phase: Bound, bound to: "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69", bindCompleted: true, boundByController: true
... skipping 26 lines ...
I0906 20:35:29.919727       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-jtkbc] status: phase Bound already set
I0906 20:35:29.919763       1 pv_controller.go:1038] volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" bound to claim "azuredisk-5194/pvc-jtkbc"
I0906 20:35:29.919800       1 pv_controller.go:1039] volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-jtkbc (uid: 40df834f-33ed-4213-8d8f-1dacee3468c2)", boundByController: true
I0906 20:35:29.919820       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-jtkbc" status after binding: phase: Bound, bound to: "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2", bindCompleted: true, boundByController: true
I0906 20:35:29.923878       1 pv_controller.go:1340] isVolumeReleased[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is released
I0906 20:35:29.924019       1 pv_controller.go:1404] doDeleteVolume [pvc-2fdef245-5695-4274-98cb-008e1afa81f9]
I0906 20:35:29.924062       1 pv_controller.go:1259] deletion of volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) since it's in attaching or detaching state
I0906 20:35:29.924141       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: set phase Failed
I0906 20:35:29.924248       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: phase Failed already set
E0906 20:35:29.924397       1 goroutinemap.go:150] Operation for "delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]" failed. No retries permitted until 2022-09-06 20:35:30.924361087 +0000 UTC m=+659.366328283 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) since it's in attaching or detaching state
I0906 20:35:29.938358       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-mcztr ReadyCondition updated. Updating timestamp.
I0906 20:35:33.385745       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34936" resp=200
I0906 20:35:38.729412       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0906 20:35:42.458677       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) returned with <nil>
I0906 20:35:42.458719       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9) succeeded
I0906 20:35:42.458730       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9 was detached from node:capz-lcwcec-md-0-mcztr
... skipping 16 lines ...
I0906 20:35:44.917438       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:35:44.917487       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc found: phase: Bound, bound to: "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2", bindCompleted: true, boundByController: true
I0906 20:35:44.917504       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: all is bound
I0906 20:35:44.917511       1 pv_controller.go:858] updating PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: set phase Bound
I0906 20:35:44.917521       1 pv_controller.go:861] updating PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase Bound already set
I0906 20:35:44.917534       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" with version 2063
I0906 20:35:44.917574       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: phase: Failed, bound to: "azuredisk-5194/pvc-5c87b (uid: 2fdef245-5695-4274-98cb-008e1afa81f9)", boundByController: true
I0906 20:35:44.917615       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is bound to claim azuredisk-5194/pvc-5c87b
I0906 20:35:44.917639       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: claim azuredisk-5194/pvc-5c87b not found
I0906 20:35:44.917647       1 pv_controller.go:1108] reclaimVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: policy is Delete
I0906 20:35:44.917662       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]]
I0906 20:35:44.916894       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-45vbn" with version 1860
I0906 20:35:44.917713       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-45vbn]: phase: Bound, bound to: "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69", bindCompleted: true, boundByController: true
... skipping 34 lines ...
I0906 20:35:45.590788       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-control-plane-jfxgf"
I0906 20:35:49.941147       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-control-plane-jfxgf ReadyCondition updated. Updating timestamp.
I0906 20:35:50.100915       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-2fdef245-5695-4274-98cb-008e1afa81f9
I0906 20:35:50.100962       1 pv_controller.go:1435] volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" deleted
I0906 20:35:50.100976       1 pv_controller.go:1283] deleteVolumeOperation [pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: success
I0906 20:35:50.106952       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fdef245-5695-4274-98cb-008e1afa81f9" with version 2110
I0906 20:35:50.107033       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: phase: Failed, bound to: "azuredisk-5194/pvc-5c87b (uid: 2fdef245-5695-4274-98cb-008e1afa81f9)", boundByController: true
I0906 20:35:50.106995       1 pv_protection_controller.go:205] Got event on PV pvc-2fdef245-5695-4274-98cb-008e1afa81f9
I0906 20:35:50.107090       1 pv_protection_controller.go:125] Processing PV pvc-2fdef245-5695-4274-98cb-008e1afa81f9
I0906 20:35:50.107425       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: volume is bound to claim azuredisk-5194/pvc-5c87b
I0906 20:35:50.107452       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: claim azuredisk-5194/pvc-5c87b not found
I0906 20:35:50.107460       1 pv_controller.go:1108] reclaimVolume[pvc-2fdef245-5695-4274-98cb-008e1afa81f9]: policy is Delete
I0906 20:35:50.107475       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fdef245-5695-4274-98cb-008e1afa81f9[181458be-4768-42d2-9e56-283778aac881]]
... skipping 188 lines ...
I0906 20:36:21.700192       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc not found
I0906 20:36:21.700307       1 pv_controller.go:1108] reclaimVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: policy is Delete
I0906 20:36:21.700417       1 pv_controller.go:1752] scheduleOperation[delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]]
I0906 20:36:21.700538       1 pv_controller.go:1763] operation "delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]" is already running, skipping
I0906 20:36:21.704742       1 pv_controller.go:1340] isVolumeReleased[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is released
I0906 20:36:21.704760       1 pv_controller.go:1404] doDeleteVolume [pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]
I0906 20:36:21.920978       1 pv_controller.go:1259] deletion of volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:36:21.921005       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: set phase Failed
I0906 20:36:21.921016       1 pv_controller.go:858] updating PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: set phase Failed
I0906 20:36:21.925677       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" with version 2168
I0906 20:36:21.926014       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase: Failed, bound to: "azuredisk-5194/pvc-jtkbc (uid: 40df834f-33ed-4213-8d8f-1dacee3468c2)", boundByController: true
I0906 20:36:21.926238       1 pv_protection_controller.go:205] Got event on PV pvc-40df834f-33ed-4213-8d8f-1dacee3468c2
I0906 20:36:21.926394       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:36:21.926556       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc not found
I0906 20:36:21.926614       1 pv_controller.go:1108] reclaimVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: policy is Delete
I0906 20:36:21.926826       1 pv_controller.go:1752] scheduleOperation[delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]]
I0906 20:36:21.927010       1 pv_controller.go:1763] operation "delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]" is already running, skipping
I0906 20:36:21.926118       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" with version 2168
I0906 20:36:21.927169       1 pv_controller.go:879] volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" entered phase "Failed"
I0906 20:36:21.927181       1 pv_controller.go:901] volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
E0906 20:36:21.927232       1 goroutinemap.go:150] Operation for "delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]" failed. No retries permitted until 2022-09-06 20:36:22.427212296 +0000 UTC m=+710.869179492 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:36:21.927499       1 event.go:291] "Event occurred" object="pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted"
I0906 20:36:23.385664       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.602µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37402" resp=200
I0906 20:36:24.746319       1 controller.go:272] Triggering nodeSync
I0906 20:36:24.746365       1 controller.go:291] nodeSync has been triggered
I0906 20:36:24.746380       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 20:36:24.746390       1 controller.go:804] Finished updateLoadBalancerHosts
... skipping 26 lines ...
I0906 20:36:29.920939       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-45vbn] status: set phase Bound
I0906 20:36:29.920981       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-45vbn] status: phase Bound already set
I0906 20:36:29.921001       1 pv_controller.go:1038] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" bound to claim "azuredisk-5194/pvc-45vbn"
I0906 20:36:29.921022       1 pv_controller.go:1039] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:36:29.921081       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-45vbn" status after binding: phase: Bound, bound to: "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69", bindCompleted: true, boundByController: true
I0906 20:36:29.919937       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" with version 2168
I0906 20:36:29.921238       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase: Failed, bound to: "azuredisk-5194/pvc-jtkbc (uid: 40df834f-33ed-4213-8d8f-1dacee3468c2)", boundByController: true
I0906 20:36:29.921332       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:36:29.921359       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc not found
I0906 20:36:29.921368       1 pv_controller.go:1108] reclaimVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: policy is Delete
I0906 20:36:29.921422       1 pv_controller.go:1752] scheduleOperation[delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]]
I0906 20:36:29.921456       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 1857
I0906 20:36:29.921531       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Bound, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
... skipping 2 lines ...
I0906 20:36:29.921784       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: all is bound
I0906 20:36:29.921801       1 pv_controller.go:858] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Bound
I0906 20:36:29.921826       1 pv_controller.go:861] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase Bound already set
I0906 20:36:29.921629       1 pv_controller.go:1231] deleteVolumeOperation [pvc-40df834f-33ed-4213-8d8f-1dacee3468c2] started
I0906 20:36:29.926060       1 pv_controller.go:1340] isVolumeReleased[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is released
I0906 20:36:29.926082       1 pv_controller.go:1404] doDeleteVolume [pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]
I0906 20:36:29.926147       1 pv_controller.go:1259] deletion of volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) since it's in attaching or detaching state
I0906 20:36:29.926185       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: set phase Failed
I0906 20:36:29.926239       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase Failed already set
E0906 20:36:29.926320       1 goroutinemap.go:150] Operation for "delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]" failed. No retries permitted until 2022-09-06 20:36:30.926265433 +0000 UTC m=+719.368232629 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) since it's in attaching or detaching state
I0906 20:36:29.948899       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-mcztr ReadyCondition updated. Updating timestamp.
I0906 20:36:32.244585       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 12 items received
I0906 20:36:33.385631       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="95.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58892" resp=200
I0906 20:36:42.389567       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) returned with <nil>
I0906 20:36:42.389611       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2) succeeded
I0906 20:36:42.389621       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2 was detached from node:capz-lcwcec-md-0-mcztr
... skipping 17 lines ...
I0906 20:36:44.920994       1 pv_controller.go:1012] binding volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" to claim "azuredisk-5194/pvc-45vbn"
I0906 20:36:44.921003       1 pv_controller.go:861] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase Bound already set
I0906 20:36:44.921005       1 pv_controller.go:910] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: binding to "azuredisk-5194/pvc-45vbn"
I0906 20:36:44.921017       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" with version 2168
I0906 20:36:44.921020       1 pv_controller.go:922] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: already bound to "azuredisk-5194/pvc-45vbn"
I0906 20:36:44.921027       1 pv_controller.go:858] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Bound
I0906 20:36:44.921034       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase: Failed, bound to: "azuredisk-5194/pvc-jtkbc (uid: 40df834f-33ed-4213-8d8f-1dacee3468c2)", boundByController: true
I0906 20:36:44.921035       1 pv_controller.go:861] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase Bound already set
I0906 20:36:44.921044       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-45vbn]: binding to "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69"
I0906 20:36:44.921057       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:36:44.921065       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-45vbn]: already bound to "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69"
I0906 20:36:44.921074       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc not found
I0906 20:36:44.921081       1 pv_controller.go:1108] reclaimVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: policy is Delete
... skipping 10 lines ...
I0906 20:36:45.806309       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0906 20:36:47.084333       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:36:50.228398       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2
I0906 20:36:50.228436       1 pv_controller.go:1435] volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" deleted
I0906 20:36:50.228450       1 pv_controller.go:1283] deleteVolumeOperation [pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: success
I0906 20:36:50.236710       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-40df834f-33ed-4213-8d8f-1dacee3468c2" with version 2212
I0906 20:36:50.236753       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: phase: Failed, bound to: "azuredisk-5194/pvc-jtkbc (uid: 40df834f-33ed-4213-8d8f-1dacee3468c2)", boundByController: true
I0906 20:36:50.236894       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: volume is bound to claim azuredisk-5194/pvc-jtkbc
I0906 20:36:50.237014       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: claim azuredisk-5194/pvc-jtkbc not found
I0906 20:36:50.237033       1 pv_controller.go:1108] reclaimVolume[pvc-40df834f-33ed-4213-8d8f-1dacee3468c2]: policy is Delete
I0906 20:36:50.237118       1 pv_controller.go:1752] scheduleOperation[delete-pvc-40df834f-33ed-4213-8d8f-1dacee3468c2[a1db27e4-f5d5-4c51-9642-5735bbe3dd18]]
I0906 20:36:50.237201       1 pv_controller.go:1231] deleteVolumeOperation [pvc-40df834f-33ed-4213-8d8f-1dacee3468c2] started
I0906 20:36:50.237390       1 pv_protection_controller.go:205] Got event on PV pvc-40df834f-33ed-4213-8d8f-1dacee3468c2
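Once the managed disk delete succeeds, deleteVolumeOperation reports success and the PV object itself is cleaned up because its reclaim policy is Delete. A small client-go sketch (assuming the test cluster's ./kubeconfig; the PV name is copied from the log above) that inspects a PV's phase and reclaim policy while this cycle is in flight:

// pv_inspect_sketch.go
// Minimal client-go sketch to read a PV's phase and reclaim policy.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// PV name taken from the controller log; any released PV works.
	pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(),
		"pvc-40df834f-33ed-4213-8d8f-1dacee3468c2", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Phase stays Released/Failed while deletion retries; the object
	// disappears entirely once doDeleteVolume succeeds.
	fmt.Printf("phase=%s reclaimPolicy=%s\n",
		pv.Status.Phase, pv.Spec.PersistentVolumeReclaimPolicy)
}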
... skipping 147 lines ...
I0906 20:37:22.378757       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:37:22.378769       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:37:22.378821       1 pv_controller.go:1763] operation "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" is already running, skipping
I0906 20:37:22.378909       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69] started
I0906 20:37:22.380646       1 pv_controller.go:1340] isVolumeReleased[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is released
I0906 20:37:22.380663       1 pv_controller.go:1404] doDeleteVolume [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]
I0906 20:37:22.406589       1 pv_controller.go:1259] deletion of volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:37:22.406610       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Failed
I0906 20:37:22.406619       1 pv_controller.go:858] updating PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Failed
I0906 20:37:22.409766       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2272
I0906 20:37:22.409978       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Failed, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:37:22.410152       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is bound to claim azuredisk-5194/pvc-45vbn
I0906 20:37:22.410292       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: claim azuredisk-5194/pvc-45vbn not found
I0906 20:37:22.410431       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:37:22.410556       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:37:22.410685       1 pv_controller.go:1763] operation "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" is already running, skipping
I0906 20:37:22.410814       1 pv_protection_controller.go:205] Got event on PV pvc-9e1ffb81-ecad-4100-964e-2026c0108d69
I0906 20:37:22.411139       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2272
I0906 20:37:22.411182       1 pv_controller.go:879] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" entered phase "Failed"
I0906 20:37:22.411197       1 pv_controller.go:901] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
E0906 20:37:22.411234       1 goroutinemap.go:150] Operation for "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" failed. No retries permitted until 2022-09-06 20:37:22.911216816 +0000 UTC m=+771.353183912 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:37:22.411533       1 event.go:291] "Event occurred" object="pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted"
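The VolumeFailedDelete warning above reflects an ordering constraint rather than a real failure: an Azure managed disk cannot be deleted while it is attached to a VM (or mid attach/detach), so the delete only succeeds after DetachVolume completes. A hypothetical sketch of that guard, with made-up types, just to make the ordering explicit:

// delete_guard_sketch.go
// Hypothetical model of the ordering the cloud provider enforces here:
// refuse to delete an attached or transitioning disk and let the PV
// controller retry after its backoff.
package main

import (
	"errors"
	"fmt"
)

type diskState int

const (
	detached diskState = iota
	attached
	attachingOrDetaching
)

var errRetryLater = errors.New("disk busy, retry after backoff")

func tryDeleteDisk(state diskState) error {
	switch state {
	case attached:
		return fmt.Errorf("already attached to node, could not be deleted")
	case attachingOrDetaching:
		return errRetryLater
	default:
		return nil // safe to issue the delete call
	}
}

func main() {
	for _, s := range []diskState{attached, attachingOrDetaching, detached} {
		fmt.Println(tryDeleteDisk(s))
	}
}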
I0906 20:37:23.385059       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48852" resp=200
I0906 20:37:24.836109       1 gc_controller.go:161] GC'ing orphaned
I0906 20:37:24.836140       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:37:26.649313       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:37:27.822344       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 80 items received
... skipping 7 lines ...
I0906 20:37:29.631957       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 from node "capz-lcwcec-md-0-jzv54"
I0906 20:37:29.779329       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69"
I0906 20:37:29.779355       1 azure_controller_standard.go:166] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69)
I0906 20:37:29.814802       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:29.923263       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:29.923371       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2272
I0906 20:37:29.923440       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Failed, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:37:29.923494       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is bound to claim azuredisk-5194/pvc-45vbn
I0906 20:37:29.923520       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: claim azuredisk-5194/pvc-45vbn not found
I0906 20:37:29.923533       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:37:29.923571       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:37:29.923617       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69] started
I0906 20:37:29.926821       1 pv_controller.go:1340] isVolumeReleased[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is released
I0906 20:37:29.926837       1 pv_controller.go:1404] doDeleteVolume [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]
I0906 20:37:29.926902       1 pv_controller.go:1259] deletion of volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) since it's in attaching or detaching state
I0906 20:37:29.926920       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Failed
I0906 20:37:29.926964       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase Failed already set
E0906 20:37:29.927046       1 goroutinemap.go:150] Operation for "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" failed. No retries permitted until 2022-09-06 20:37:30.926996417 +0000 UTC m=+779.368963513 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) since it's in attaching or detaching state
I0906 20:37:29.956893       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-jzv54 ReadyCondition updated. Updating timestamp.
I0906 20:37:30.855028       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 17 items received
I0906 20:37:33.386027       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.502µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:52682" resp=200
I0906 20:37:35.734786       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received
I0906 20:37:35.807877       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 0 items received
I0906 20:37:43.385916       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="102.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56752" resp=200
I0906 20:37:44.745151       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:44.815933       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:44.836219       1 gc_controller.go:161] GC'ing orphaned
I0906 20:37:44.836312       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:37:44.923569       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:44.923651       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2272
I0906 20:37:44.923752       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Failed, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:37:44.923837       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is bound to claim azuredisk-5194/pvc-45vbn
I0906 20:37:44.923916       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: claim azuredisk-5194/pvc-45vbn not found
I0906 20:37:44.924368       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:37:44.924455       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:37:44.924538       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69] started
I0906 20:37:44.928780       1 pv_controller.go:1340] isVolumeReleased[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is released
I0906 20:37:44.928800       1 pv_controller.go:1404] doDeleteVolume [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]
I0906 20:37:44.928854       1 pv_controller.go:1259] deletion of volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) since it's in attaching or detaching state
I0906 20:37:44.928880       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: set phase Failed
I0906 20:37:44.928889       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase Failed already set
E0906 20:37:44.928955       1 goroutinemap.go:150] Operation for "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" failed. No retries permitted until 2022-09-06 20:37:46.928897974 +0000 UTC m=+795.370865070 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) since it's in attaching or detaching state
I0906 20:37:45.215119       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) returned with <nil>
I0906 20:37:45.215161       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69) succeeded
I0906 20:37:45.215171       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69 was detached from node:capz-lcwcec-md-0-jzv54
I0906 20:37:45.215197       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69") on node "capz-lcwcec-md-0-jzv54" 
I0906 20:37:45.568611       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:37:50.040594       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:37:53.385596       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="120.103µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51136" resp=200
I0906 20:37:53.808025       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0906 20:37:59.816692       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:59.924090       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:59.924351       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2272
I0906 20:37:59.924425       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Failed, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:37:59.924516       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is bound to claim azuredisk-5194/pvc-45vbn
I0906 20:37:59.924596       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: claim azuredisk-5194/pvc-45vbn not found
I0906 20:37:59.924656       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:37:59.924738       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:37:59.924843       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69] started
I0906 20:37:59.928960       1 pv_controller.go:1340] isVolumeReleased[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is released
... skipping 9 lines ...
I0906 20:38:04.836676       1 gc_controller.go:161] GC'ing orphaned
I0906 20:38:04.836707       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:38:05.329294       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69
I0906 20:38:05.329337       1 pv_controller.go:1435] volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" deleted
I0906 20:38:05.329351       1 pv_controller.go:1283] deleteVolumeOperation [pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: success
I0906 20:38:05.336301       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9e1ffb81-ecad-4100-964e-2026c0108d69" with version 2336
I0906 20:38:05.336340       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: phase: Failed, bound to: "azuredisk-5194/pvc-45vbn (uid: 9e1ffb81-ecad-4100-964e-2026c0108d69)", boundByController: true
I0906 20:38:05.336367       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: volume is bound to claim azuredisk-5194/pvc-45vbn
I0906 20:38:05.336382       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: claim azuredisk-5194/pvc-45vbn not found
I0906 20:38:05.336389       1 pv_controller.go:1108] reclaimVolume[pvc-9e1ffb81-ecad-4100-964e-2026c0108d69]: policy is Delete
I0906 20:38:05.336403       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]]
I0906 20:38:05.336409       1 pv_controller.go:1763] operation "delete-pvc-9e1ffb81-ecad-4100-964e-2026c0108d69[21ffee0d-9499-4803-809f-955b5fafb96e]" is already running, skipping
I0906 20:38:05.336426       1 pv_protection_controller.go:205] Got event on PV pvc-9e1ffb81-ecad-4100-964e-2026c0108d69
... skipping 49 lines ...
I0906 20:38:08.151468       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-1353/pvc-vx77l]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0906 20:38:08.153235       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96" (6.502849ms)
I0906 20:38:08.153559       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96"
I0906 20:38:08.153702       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96", timestamp:time.Time{wall:0xc0be09cc07b03156, ext:816570954674, loc:(*time.Location)(0x751a1a0)}}
I0906 20:38:08.153844       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96" (147.004µs)
I0906 20:38:08.154910       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-mxzv6" duration="31.753728ms"
I0906 20:38:08.154942       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-mxzv6" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-mxzv6\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:38:08.154976       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-mxzv6" startTime="2022-09-06 20:38:08.154957473 +0000 UTC m=+816.596924669"
I0906 20:38:08.159995       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-mxzv6"
I0906 20:38:08.160065       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-vx77l"
I0906 20:38:08.160109       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-vx77l" with version 2360
I0906 20:38:08.160125       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-vx77l]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0906 20:38:08.160153       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-vx77l]: no volume found
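The "Error syncing deployment ... the object has been modified" message a few lines above is an ordinary optimistic-concurrency conflict between controllers updating the same Deployment; the controller simply resyncs. When writing against the API from a client, the usual pattern is to re-read and retry on conflict. A hedged sketch using client-go's retry helper (kubeconfig path and the replica change are illustrative; the deployment name/namespace are taken from the log):

// conflict_retry_sketch.go
// Sketch of the read-modify-write-with-retry pattern for update conflicts.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest object on every attempt so the update is
		// applied against the current resourceVersion.
		d, err := cs.AppsV1().Deployments("azuredisk-1353").Get(
			context.TODO(), "azuredisk-volume-tester-mxzv6", metav1.GetOptions{})
		if err != nil {
			return err
		}
		replicas := int32(1)
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments("azuredisk-1353").Update(
			context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatal(err)
	}
}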
... skipping 107 lines ...
I0906 20:38:12.642585       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-jtkbc.17125fb767e98c8f, uid 0a5a0e7e-8c05-4058-8b37-e7f1fefb3cc7, event type delete
I0906 20:38:12.645943       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-jtkbc.17125fb7f729a507, uid ba868c35-aab3-4ea1-96d9-3d46754b5e3c, event type delete
I0906 20:38:12.660821       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid cf16b0ad-d8d0-4ad1-8793-670c9da5bf71, event type delete
I0906 20:38:12.665541       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (4.946014ms)
I0906 20:38:12.743666       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-nmn9f, uid 86258244-9f77-48e7-ab6c-63d22860200e, event type delete
I0906 20:38:12.755087       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2.301µs)
E0906 20:38:12.756622       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-l4k4v" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0906 20:38:12.756670       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0906 20:38:12.757084       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid 11685f17-8ce3-48f2-8b3b-2ad3f5c62658, event type delete
I0906 20:38:12.785290       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5194, estimate: 0, errors: <nil>
I0906 20:38:12.785558       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (3.2µs)
I0906 20:38:12.793561       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5194" (288.352413ms)
I0906 20:38:13.386428       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58958" resp=200
... skipping 83 lines ...
I0906 20:38:24.958716       1 disruption.go:418] No matching pdb for pod "azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx"
I0906 20:38:24.964366       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx"
I0906 20:38:24.965398       1 replica_set.go:443] Pod azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx updated, objectMeta {Name:azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx GenerateName:azuredisk-volume-tester-mxzv6-6b746d8f96- Namespace:azuredisk-1353 SelfLink: UID:1d251771-b584-4b26-91fe-82eb9afe07a9 ResourceVersion:2438 Generation:0 CreationTimestamp:2022-09-06 20:38:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:6b746d8f96] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-mxzv6-6b746d8f96 UID:7c68a421-898f-494e-a925-b10e513581c4 Controller:0xc002520357 BlockOwnerDeletion:0xc002520358}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:38:24 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c68a421-898f-494e-a925-b10e513581c4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx GenerateName:azuredisk-volume-tester-mxzv6-6b746d8f96- Namespace:azuredisk-1353 SelfLink: UID:1d251771-b584-4b26-91fe-82eb9afe07a9 ResourceVersion:2439 Generation:0 CreationTimestamp:2022-09-06 20:38:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:6b746d8f96] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-mxzv6-6b746d8f96 UID:7c68a421-898f-494e-a925-b10e513581c4 Controller:0xc00271bbd7 BlockOwnerDeletion:0xc00271bbd8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:38:24 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c68a421-898f-494e-a925-b10e513581c4\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]}.
I0906 20:38:24.965508       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx"
I0906 20:38:24.965822       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx, PodDisruptionBudget controller will avoid syncing.
I0906 20:38:24.965927       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx"
W0906 20:38:24.970119       1 reconciler.go:385] Multi-Attach error for volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5") from node "capz-lcwcec-md-0-mcztr" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96-zncdm on node capz-lcwcec-md-0-jzv54
I0906 20:38:24.970701       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96-5bhjx" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5\" Volume is already used by pod(s) azuredisk-volume-tester-mxzv6-6b746d8f96-zncdm"
I0906 20:38:24.974325       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96" (27.835822ms)
I0906 20:38:24.974521       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96", timestamp:time.Time{wall:0xc0be09d03871553a, ext:833388918578, loc:(*time.Location)(0x751a1a0)}}
I0906 20:38:24.974710       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96-zncdm in state Running, deletion time 2022-09-06 20:38:54 +0000 UTC
I0906 20:38:24.974895       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 1->0, availableReplicas 1->0, sequence No: 1->1
I0906 20:38:24.975467       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-mxzv6-6b746d8f96"
I0906 20:38:24.975743       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-mxzv6" startTime="2022-09-06 20:38:24.975681924 +0000 UTC m=+833.417649120"
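The Multi-Attach warning above is expected for a ReadWriteOnce Azure disk: the replacement pod landed on a different node while the attach/detach controller still records the volume as attached to the old one, so attachment waits until the old node's VolumeAttachment is cleaned up. A sketch (same ./kubeconfig assumption) that lists VolumeAttachment objects to see which node currently holds a given PV:

// volumeattachment_sketch.go
// Sketch: list VolumeAttachments and show which node each PV is attached to.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	vas, err := cs.StorageV1().VolumeAttachments().List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, va := range vas.Items {
		pv := "<unset>"
		if va.Spec.Source.PersistentVolumeName != nil {
			pv = *va.Spec.Source.PersistentVolumeName
		}
		fmt.Printf("pv=%s node=%s attached=%v\n",
			pv, va.Spec.NodeName, va.Status.Attached)
	}
}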
... skipping 431 lines ...
I0906 20:40:02.158643       1 pv_controller.go:1108] reclaimVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: policy is Delete
I0906 20:40:02.158655       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]]
I0906 20:40:02.158662       1 pv_controller.go:1763] operation "delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]" is already running, skipping
I0906 20:40:02.158677       1 pv_protection_controller.go:205] Got event on PV pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5
I0906 20:40:02.160151       1 pv_controller.go:1340] isVolumeReleased[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is released
I0906 20:40:02.160170       1 pv_controller.go:1404] doDeleteVolume [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]
I0906 20:40:02.181786       1 pv_controller.go:1259] deletion of volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:40:02.181808       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: set phase Failed
I0906 20:40:02.181817       1 pv_controller.go:858] updating PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: set phase Failed
I0906 20:40:02.185249       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" with version 2618
I0906 20:40:02.185430       1 pv_controller.go:879] volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" entered phase "Failed"
I0906 20:40:02.185561       1 pv_controller.go:901] volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
E0906 20:40:02.185682       1 goroutinemap.go:150] Operation for "delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]" failed. No retries permitted until 2022-09-06 20:40:02.685655257 +0000 UTC m=+931.127622453 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted
I0906 20:40:02.185275       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" with version 2618
I0906 20:40:02.185964       1 event.go:291] "Event occurred" object="pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-mcztr), could not be deleted"
I0906 20:40:02.186127       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: phase: Failed, bound to: "azuredisk-1353/pvc-vx77l (uid: 5bb07e9f-f5b7-41c7-b6cc-acba53329de5)", boundByController: true
I0906 20:40:02.186333       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is bound to claim azuredisk-1353/pvc-vx77l
I0906 20:40:02.185285       1 pv_protection_controller.go:205] Got event on PV pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5
I0906 20:40:02.186696       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: claim azuredisk-1353/pvc-vx77l not found
I0906 20:40:02.186884       1 pv_controller.go:1108] reclaimVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: policy is Delete
I0906 20:40:02.187068       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]]
I0906 20:40:02.187328       1 pv_controller.go:1765] operation "delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]" postponed due to exponential backoff
... skipping 14 lines ...
I0906 20:40:09.980043       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-mcztr ReadyCondition updated. Updating timestamp.
I0906 20:40:13.386088       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="91.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39510" resp=200
I0906 20:40:14.748449       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:40:14.821943       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:40:14.929652       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:40:14.929733       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" with version 2618
I0906 20:40:14.929791       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: phase: Failed, bound to: "azuredisk-1353/pvc-vx77l (uid: 5bb07e9f-f5b7-41c7-b6cc-acba53329de5)", boundByController: true
I0906 20:40:14.929830       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is bound to claim azuredisk-1353/pvc-vx77l
I0906 20:40:14.929856       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: claim azuredisk-1353/pvc-vx77l not found
I0906 20:40:14.929889       1 pv_controller.go:1108] reclaimVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: policy is Delete
I0906 20:40:14.929909       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]]
I0906 20:40:14.929953       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5] started
I0906 20:40:14.939617       1 pv_controller.go:1340] isVolumeReleased[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is released
I0906 20:40:14.939640       1 pv_controller.go:1404] doDeleteVolume [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]
I0906 20:40:14.939674       1 pv_controller.go:1259] deletion of volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) since it's in attaching or detaching state
I0906 20:40:14.939687       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: set phase Failed
I0906 20:40:14.939697       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: phase Failed already set
E0906 20:40:14.939725       1 goroutinemap.go:150] Operation for "delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]" failed. No retries permitted until 2022-09-06 20:40:15.939705307 +0000 UTC m=+944.381672403 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) since it's in attaching or detaching state
I0906 20:40:15.694512       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:40:22.513553       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-mcztr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) returned with <nil>
I0906 20:40:22.513608       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5) succeeded
I0906 20:40:22.513620       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5 was detached from node:capz-lcwcec-md-0-mcztr
I0906 20:40:22.513645       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5") on node "capz-lcwcec-md-0-mcztr" 
I0906 20:40:23.385922       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="102.602µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36340" resp=200
I0906 20:40:23.876741       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:40:24.839958       1 gc_controller.go:161] GC'ing orphaned
I0906 20:40:24.840170       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:40:29.822665       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:40:29.930074       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:40:29.930133       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" with version 2618
I0906 20:40:29.930170       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: phase: Failed, bound to: "azuredisk-1353/pvc-vx77l (uid: 5bb07e9f-f5b7-41c7-b6cc-acba53329de5)", boundByController: true
I0906 20:40:29.930211       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is bound to claim azuredisk-1353/pvc-vx77l
I0906 20:40:29.930262       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: claim azuredisk-1353/pvc-vx77l not found
I0906 20:40:29.930276       1 pv_controller.go:1108] reclaimVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: policy is Delete
I0906 20:40:29.930294       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]]
I0906 20:40:29.930348       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5] started
I0906 20:40:29.935893       1 pv_controller.go:1340] isVolumeReleased[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is released
I0906 20:40:29.935914       1 pv_controller.go:1404] doDeleteVolume [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]
I0906 20:40:33.385909       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="91.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47962" resp=200
I0906 20:40:35.092408       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5
I0906 20:40:35.092441       1 pv_controller.go:1435] volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" deleted
I0906 20:40:35.092453       1 pv_controller.go:1283] deleteVolumeOperation [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: success
I0906 20:40:35.104699       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" with version 2668
I0906 20:40:35.104740       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: phase: Failed, bound to: "azuredisk-1353/pvc-vx77l (uid: 5bb07e9f-f5b7-41c7-b6cc-acba53329de5)", boundByController: true
I0906 20:40:35.104766       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: volume is bound to claim azuredisk-1353/pvc-vx77l
I0906 20:40:35.104783       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: claim azuredisk-1353/pvc-vx77l not found
I0906 20:40:35.104791       1 pv_controller.go:1108] reclaimVolume[pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5]: policy is Delete
I0906 20:40:35.104805       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5[69e7ff61-82b8-407b-9311-0b0af34d1672]]
I0906 20:40:35.104830       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5] started
I0906 20:40:35.105026       1 pv_protection_controller.go:205] Got event on PV pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5
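After "deleteVolumeOperation ... success" the backing disk is gone, and the PV object itself is removed once pv-protection releases its finalizer. A small sketch, under the same kubeconfig assumption and with the PV name from the log, that polls until the PV has actually disappeared (useful when a test tears down storage and needs to wait for cleanup):

// wait_pv_deleted_sketch.go
// Sketch: poll until a PV object is gone after its backing disk is deleted.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	name := "pvc-5bb07e9f-f5b7-41c7-b6cc-acba53329de5" // PV from the log
	for i := 0; i < 60; i++ {
		_, err := cs.CoreV1().PersistentVolumes().Get(
			context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("PV deleted")
			return
		}
		if err != nil {
			log.Fatal(err)
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for PV deletion")
}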
... skipping 144 lines ...
I0906 20:40:40.824817       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-mxzv6-6b746d8f96.17125fedf8bc1a57, uid cb83b3f5-c7e6-4e1c-bb8b-1fd87e709070, event type delete
I0906 20:40:40.832078       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-mxzv6-6b746d8f96.17125ff1e30e1d5d, uid 5b23bc2f-a0cc-4783-af97-b22112305c35, event type delete
I0906 20:40:40.837099       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-mxzv6.17125fedf8402919, uid 8ddd1aea-abf7-4a8a-8d70-0cf628291472, event type delete
I0906 20:40:40.840181       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-vx77l.17125fedf5e3dea0, uid 0d55961a-0e28-4f75-a81f-42e9ef6b53a0, event type delete
I0906 20:40:40.843847       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-vx77l.17125fee856d9707, uid 213dcc19-3cd2-4d0c-873b-40cece099edb, event type delete
I0906 20:40:40.875818       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-5vp2l, uid de46df2b-8d73-4098-83d1-ca74667acf53, event type delete
E0906 20:40:40.887283       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-j6vwc" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0906 20:40:40.887512       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (3.1µs)
I0906 20:40:40.887533       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1353/default), service account deleted, removing tokens
I0906 20:40:40.887553       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1353, name default, uid 25fcaf49-3176-406b-8141-a1f974decdc3, event type delete
I0906 20:40:40.918689       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1353, name kube-root-ca.crt, uid c3227fda-1f46-490e-8410-6f62418ae06d, event type delete
I0906 20:40:40.920431       1 publisher.go:186] Finished syncing namespace "azuredisk-1353" (1.983148ms)
I0906 20:40:40.953821       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1353, estimate: 0, errors: <nil>
... skipping 301 lines ...
I0906 20:40:55.732785       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-nm9hk] status: phase Bound already set
I0906 20:40:55.732794       1 pv_controller.go:1038] volume "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" bound to claim "azuredisk-59/pvc-nm9hk"
I0906 20:40:55.732809       1 pv_controller.go:1039] volume "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-nm9hk (uid: 3c93a50e-4996-4915-bf49-e1b256d3ac9e)", boundByController: true
I0906 20:40:55.732821       1 pv_controller.go:1040] claim "azuredisk-59/pvc-nm9hk" status after binding: phase: Bound, bound to: "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e", bindCompleted: true, boundByController: true
I0906 20:40:55.750866       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4538
I0906 20:40:55.800023       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4538, name default-token-m5ktd, uid ef40fecf-bff6-45c7-b892-251472f9d8e7, event type delete
E0906 20:40:55.816383       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4538/default: secrets "default-token-5mldb" is forbidden: unable to create new content in namespace azuredisk-4538 because it is being terminated
I0906 20:40:55.823045       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4538/default), service account deleted, removing tokens
I0906 20:40:55.823244       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4538" (2.9µs)
I0906 20:40:55.823264       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4538, name default, uid cd1c0f02-421c-4561-b4e7-cdc912fd6d56, event type delete
I0906 20:40:55.828085       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4538, name kube-root-ca.crt, uid faa63a4f-0b0e-42aa-9aa2-2261121e47ff, event type delete
I0906 20:40:55.829509       1 publisher.go:186] Finished syncing namespace "azuredisk-4538" (1.662937ms)
I0906 20:40:55.915264       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4538, name pvc-qnqjx.1712601104ae22d5, uid 09cbddae-07f4-4550-8788-fa8a2e51f51c, event type delete
... skipping 16 lines ...
I0906 20:40:56.430393       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25" to node "capz-lcwcec-md-0-jzv54".
I0906 20:40:56.430462       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" to node "capz-lcwcec-md-0-jzv54".
I0906 20:40:56.430477       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" to node "capz-lcwcec-md-0-jzv54".
I0906 20:40:56.455246       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" lun 0 to node "capz-lcwcec-md-0-jzv54".
I0906 20:40:56.455282       1 azure_controller_standard.go:93] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - attach disk(capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e) with DiskEncryptionSetID()
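"GetDiskLun returned: cannot find Lun" above just means the disk is not yet attached to the VM; the attacher then picks a free LUN (lun 0 here) and issues the attach. A hypothetical sketch of choosing the lowest unused LUN on a VM (not the provider's actual code; the LUN limit is illustrative):

// lun_pick_sketch.go
// Hypothetical sketch: pick the smallest free LUN for a new data disk.
package main

import "fmt"

// lowestFreeLUN returns the smallest LUN in [0, maxLUNs) not already in use.
func lowestFreeLUN(used map[int32]bool, maxLUNs int32) (int32, bool) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, true
		}
	}
	return -1, false // no data-disk slots left on the VM
}

func main() {
	used := map[int32]bool{1: true, 2: true} // LUNs already taken on the VM
	if lun, ok := lowestFreeLUN(used, 64); ok {
		fmt.Printf("attach at lun %d\n", lun)
	}
}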
I0906 20:40:56.474356       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-f9r5j, uid 3e5561ec-6b7c-457f-84c1-0d1873f632fe, event type delete
E0906 20:40:56.489725       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-kjlrr" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0906 20:40:56.493902       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0906 20:40:56.494108       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.1µs)
I0906 20:40:56.494249       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid 4d6acef8-4995-4f56-8bf5-b92ccf6e22d1, event type delete
I0906 20:40:56.514218       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.2µs)
I0906 20:40:56.514517       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8266, estimate: 0, errors: <nil>
I0906 20:40:56.523706       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8266" (172.132502ms)
... skipping 6 lines ...
I0906 20:40:56.988704       1 publisher.go:186] Finished syncing namespace "azuredisk-4376" (2.980768ms)
I0906 20:40:57.058625       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3.1µs)
I0906 20:40:57.060054       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0906 20:40:57.067104       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (148.925576ms)
I0906 20:40:57.515751       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7996
I0906 20:40:57.533424       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-4mb7x, uid ff138505-dab2-4424-8b3c-4c0b5e8b6dae, event type delete
E0906 20:40:57.547289       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-vpltx" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0906 20:40:57.618450       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0906 20:40:57.618692       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (3.001µs)
I0906 20:40:57.618714       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid 48f8e3e0-a565-49ee-9a4d-aa895548fff0, event type delete
I0906 20:40:57.635252       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid bd2cd4d2-4b5d-47eb-8943-564e8dddb6b0, event type delete
I0906 20:40:57.637586       1 publisher.go:186] Finished syncing namespace "azuredisk-7996" (2.507057ms)
I0906 20:40:57.665541       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7996, estimate: 0, errors: <nil>
... skipping 368 lines ...
I0906 20:41:34.274755       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:41:34.274831       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:41:34.274927       1 pv_controller.go:1763] operation "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" is already running, skipping
I0906 20:41:34.275014       1 pv_controller.go:1231] deleteVolumeOperation [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474] started
I0906 20:41:34.276568       1 pv_controller.go:1340] isVolumeReleased[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is released
I0906 20:41:34.276583       1 pv_controller.go:1404] doDeleteVolume [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]
I0906 20:41:34.308631       1 pv_controller.go:1259] deletion of volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:41:34.308656       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: set phase Failed
I0906 20:41:34.308669       1 pv_controller.go:858] updating PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: set phase Failed
I0906 20:41:34.312696       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2904
I0906 20:41:34.312847       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase: Failed, bound to: "azuredisk-59/pvc-79h52 (uid: 611e058a-d277-4129-a4fa-fbaa4e14a474)", boundByController: true
I0906 20:41:34.312952       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is bound to claim azuredisk-59/pvc-79h52
I0906 20:41:34.313049       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: claim azuredisk-59/pvc-79h52 not found
I0906 20:41:34.313117       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:41:34.313203       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:41:34.313216       1 pv_controller.go:1763] operation "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" is already running, skipping
I0906 20:41:34.312698       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2904
I0906 20:41:34.313239       1 pv_controller.go:879] volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" entered phase "Failed"
I0906 20:41:34.313290       1 pv_controller.go:901] volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:41:34.312714       1 pv_protection_controller.go:205] Got event on PV pvc-611e058a-d277-4129-a4fa-fbaa4e14a474
E0906 20:41:34.313369       1 goroutinemap.go:150] Operation for "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" failed. No retries permitted until 2022-09-06 20:41:34.81331283 +0000 UTC m=+1023.255279926 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:41:34.313592       1 event.go:291] "Event occurred" object="pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted"
I0906 20:41:38.824024       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 0 items received
I0906 20:41:39.804960       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:41:39.805155       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 to the node "capz-lcwcec-md-0-jzv54" mounted false
I0906 20:41:39.805175       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25 to the node "capz-lcwcec-md-0-jzv54" mounted false
I0906 20:41:39.805193       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e to the node "capz-lcwcec-md-0-jzv54" mounted false
... skipping 75 lines ...
I0906 20:41:44.932968       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: volume is bound to claim azuredisk-59/pvc-xckch
I0906 20:41:44.932986       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: claim azuredisk-59/pvc-xckch found: phase: Bound, bound to: "pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25", bindCompleted: true, boundByController: true
I0906 20:41:44.932999       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: all is bound
I0906 20:41:44.933006       1 pv_controller.go:858] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: set phase Bound
I0906 20:41:44.933015       1 pv_controller.go:861] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: phase Bound already set
I0906 20:41:44.933029       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2904
I0906 20:41:44.933047       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase: Failed, bound to: "azuredisk-59/pvc-79h52 (uid: 611e058a-d277-4129-a4fa-fbaa4e14a474)", boundByController: true
I0906 20:41:44.933070       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is bound to claim azuredisk-59/pvc-79h52
I0906 20:41:44.933088       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: claim azuredisk-59/pvc-79h52 not found
I0906 20:41:44.933094       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:41:44.933109       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:41:44.933139       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" with version 2796
I0906 20:41:44.933157       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: phase: Bound, bound to: "azuredisk-59/pvc-nm9hk (uid: 3c93a50e-4996-4915-bf49-e1b256d3ac9e)", boundByController: true
... skipping 2 lines ...
I0906 20:41:44.933489       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: claim azuredisk-59/pvc-nm9hk found: phase: Bound, bound to: "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e", bindCompleted: true, boundByController: true
I0906 20:41:44.933500       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: all is bound
I0906 20:41:44.933508       1 pv_controller.go:858] updating PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: set phase Bound
I0906 20:41:44.933515       1 pv_controller.go:861] updating PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: phase Bound already set
I0906 20:41:44.936628       1 pv_controller.go:1340] isVolumeReleased[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is released
I0906 20:41:44.936646       1 pv_controller.go:1404] doDeleteVolume [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]
I0906 20:41:44.958362       1 pv_controller.go:1259] deletion of volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:41:44.958388       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: set phase Failed
I0906 20:41:44.958397       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase Failed already set
E0906 20:41:44.958423       1 goroutinemap.go:150] Operation for "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" failed. No retries permitted until 2022-09-06 20:41:45.958406826 +0000 UTC m=+1034.400374022 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:41:45.766343       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:41:49.766361       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 0 items received
I0906 20:41:52.737501       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0906 20:41:53.385793       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51206" resp=200
I0906 20:41:55.354482       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e) returned with <nil>
I0906 20:41:55.354532       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e) succeeded
... skipping 8 lines ...
I0906 20:41:59.932667       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: volume is bound to claim azuredisk-59/pvc-xckch
I0906 20:41:59.932682       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: claim azuredisk-59/pvc-xckch found: phase: Bound, bound to: "pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25", bindCompleted: true, boundByController: true
I0906 20:41:59.932695       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: all is bound
I0906 20:41:59.932705       1 pv_controller.go:858] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: set phase Bound
I0906 20:41:59.932712       1 pv_controller.go:861] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: phase Bound already set
I0906 20:41:59.932725       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2904
I0906 20:41:59.932742       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase: Failed, bound to: "azuredisk-59/pvc-79h52 (uid: 611e058a-d277-4129-a4fa-fbaa4e14a474)", boundByController: true
I0906 20:41:59.932760       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is bound to claim azuredisk-59/pvc-79h52
I0906 20:41:59.932776       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: claim azuredisk-59/pvc-79h52 not found
I0906 20:41:59.932785       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:41:59.932789       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-xckch" with version 2788
I0906 20:41:59.932798       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:41:59.932807       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-xckch]: phase: Bound, bound to: "pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25", bindCompleted: true, boundByController: true
... skipping 34 lines ...
I0906 20:41:59.933281       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: claim azuredisk-59/pvc-nm9hk found: phase: Bound, bound to: "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e", bindCompleted: true, boundByController: true
I0906 20:41:59.933294       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: all is bound
I0906 20:41:59.933299       1 pv_controller.go:858] updating PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: set phase Bound
I0906 20:41:59.933306       1 pv_controller.go:861] updating PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: phase Bound already set
I0906 20:41:59.938175       1 pv_controller.go:1340] isVolumeReleased[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is released
I0906 20:41:59.938310       1 pv_controller.go:1404] doDeleteVolume [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]
I0906 20:41:59.938352       1 pv_controller.go:1259] deletion of volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) since it's in attaching or detaching state
I0906 20:41:59.938389       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: set phase Failed
I0906 20:41:59.938433       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase Failed already set
E0906 20:41:59.938522       1 goroutinemap.go:150] Operation for "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" failed. No retries permitted until 2022-09-06 20:42:01.938505534 +0000 UTC m=+1050.380472630 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) since it's in attaching or detaching state
I0906 20:42:03.385810       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.302µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36780" resp=200
I0906 20:42:04.843498       1 gc_controller.go:161] GC'ing orphaned
I0906 20:42:04.843712       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:42:10.778258       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) returned with <nil>
I0906 20:42:10.778381       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474) succeeded
I0906 20:42:10.778427       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 was detached from node:capz-lcwcec-md-0-jzv54
... skipping 10 lines ...
I0906 20:42:14.933099       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: volume is bound to claim azuredisk-59/pvc-xckch
I0906 20:42:14.933121       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: claim azuredisk-59/pvc-xckch found: phase: Bound, bound to: "pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25", bindCompleted: true, boundByController: true
I0906 20:42:14.933155       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: all is bound
I0906 20:42:14.933165       1 pv_controller.go:858] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: set phase Bound
I0906 20:42:14.933174       1 pv_controller.go:861] updating PersistentVolume[pvc-38d514cd-a10b-4901-a12e-17d76f3d7b25]: phase Bound already set
I0906 20:42:14.933188       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2904
I0906 20:42:14.933206       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase: Failed, bound to: "azuredisk-59/pvc-79h52 (uid: 611e058a-d277-4129-a4fa-fbaa4e14a474)", boundByController: true
I0906 20:42:14.933261       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is bound to claim azuredisk-59/pvc-79h52
I0906 20:42:14.933280       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: claim azuredisk-59/pvc-79h52 not found
I0906 20:42:14.933287       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:42:14.933302       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:42:14.933337       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e" with version 2796
I0906 20:42:14.933362       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3c93a50e-4996-4915-bf49-e1b256d3ac9e]: phase: Bound, bound to: "azuredisk-59/pvc-nm9hk (uid: 3c93a50e-4996-4915-bf49-e1b256d3ac9e)", boundByController: true
... skipping 39 lines ...
I0906 20:42:14.938920       1 pv_controller.go:1404] doDeleteVolume [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]
I0906 20:42:15.780773       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:42:20.120681       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474
I0906 20:42:20.120733       1 pv_controller.go:1435] volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" deleted
I0906 20:42:20.120748       1 pv_controller.go:1283] deleteVolumeOperation [pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: success
I0906 20:42:20.125627       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-611e058a-d277-4129-a4fa-fbaa4e14a474" with version 2978
I0906 20:42:20.125778       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: phase: Failed, bound to: "azuredisk-59/pvc-79h52 (uid: 611e058a-d277-4129-a4fa-fbaa4e14a474)", boundByController: true
I0906 20:42:20.125854       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: volume is bound to claim azuredisk-59/pvc-79h52
I0906 20:42:20.125949       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: claim azuredisk-59/pvc-79h52 not found
I0906 20:42:20.125963       1 pv_controller.go:1108] reclaimVolume[pvc-611e058a-d277-4129-a4fa-fbaa4e14a474]: policy is Delete
I0906 20:42:20.126048       1 pv_controller.go:1752] scheduleOperation[delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]]
I0906 20:42:20.126103       1 pv_controller.go:1763] operation "delete-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474[c88982fd-d4f5-4b95-a925-2ef2f237adef]" is already running, skipping
I0906 20:42:20.126142       1 pv_protection_controller.go:205] Got event on PV pvc-611e058a-d277-4129-a4fa-fbaa4e14a474
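(Note, not part of the log: the sequence above for pvc-611e058a-d277-4129-a4fa-fbaa4e14a474 shows the usual cleanup order — the delete attempts fail while the disk is still attached to capz-lcwcec-md-0-jzv54, the controller backs off with a doubling delay (500ms, 1s, 2s in the "No retries permitted until ..." messages), the disk is detached, and only then does doDeleteVolume report success. A rough manual equivalent, for illustration only and not the controller's own logic — the az CLI invocation and the doubling sleep are assumptions of this sketch; the disk ID is the one from the log:)

  # Illustration only: retry the managed-disk delete with a doubling delay,
  # mirroring the backoff visible in the controller log above.
  DISK_ID="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-611e058a-d277-4129-a4fa-fbaa4e14a474"
  delay=0.5
  until az disk delete --ids "${DISK_ID}" --yes 2>/dev/null; do
    echo "delete failed (disk presumably still attached); retrying in ${delay}s"
    sleep "${delay}"
    delay=$(awk -v d="${delay}" 'BEGIN { print d * 2 }')   # double the wait, like the backoff above
  done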
... skipping 383 lines ...
I0906 20:42:50.458469       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-79h52.17126014fb11075f, uid 5434f370-36ff-490f-9b8d-c045313616ce, event type delete
I0906 20:42:50.461465       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-nm9hk.17126014644244cd, uid 8d77efd6-0a69-4112-93d3-b8a20a49723b, event type delete
I0906 20:42:50.466707       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-nm9hk.17126014fbe68a6c, uid b7a3d1fb-cc08-4054-a1a0-10a70738e968, event type delete
I0906 20:42:50.473486       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-xckch.171260145fc2b342, uid 949bcec1-018a-4150-bdf0-cd82a1cfb0bf, event type delete
I0906 20:42:50.478530       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name pvc-xckch.17126014f4a04332, uid cdf4091f-2804-4642-8d8a-c425627cd72d, event type delete
I0906 20:42:50.497519       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-59, name default-token-k8xqd, uid 726c7a91-ffc0-4851-bb8e-bdb9cf9a7a61, event type delete
E0906 20:42:50.512007       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-59/default: secrets "default-token-8llzc" is forbidden: unable to create new content in namespace azuredisk-59 because it is being terminated
I0906 20:42:50.542134       1 tokens_controller.go:252] syncServiceAccount(azuredisk-59/default), service account deleted, removing tokens
I0906 20:42:50.542627       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-59, name default, uid 6880a194-5160-4a80-b29f-186eae8f94d4, event type delete
I0906 20:42:50.543165       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-59" (2.5µs)
I0906 20:42:50.592214       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-59, name kube-root-ca.crt, uid 52062133-5cf0-4872-b3cd-586efd44b30f, event type delete
I0906 20:42:50.593399       1 publisher.go:186] Finished syncing namespace "azuredisk-59" (1.371531ms)
I0906 20:42:50.612360       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-59" (1.9µs)
... skipping 164 lines ...
I0906 20:43:10.807111       1 pv_controller.go:1108] reclaimVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: policy is Delete
I0906 20:43:10.807129       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]]
I0906 20:43:10.807266       1 pv_controller.go:1763] operation "delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]" is already running, skipping
I0906 20:43:10.806423       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a9103ad-679b-4429-93dd-f322a37a2855] started
I0906 20:43:10.809029       1 pv_controller.go:1340] isVolumeReleased[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is released
I0906 20:43:10.809224       1 pv_controller.go:1404] doDeleteVolume [pvc-1a9103ad-679b-4429-93dd-f322a37a2855]
I0906 20:43:10.833339       1 pv_controller.go:1259] deletion of volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:43:10.833419       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: set phase Failed
I0906 20:43:10.833544       1 pv_controller.go:858] updating PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: set phase Failed
I0906 20:43:10.837094       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3137
I0906 20:43:10.837125       1 pv_controller.go:879] volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" entered phase "Failed"
I0906 20:43:10.837135       1 pv_controller.go:901] volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
E0906 20:43:10.837172       1 goroutinemap.go:150] Operation for "delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]" failed. No retries permitted until 2022-09-06 20:43:11.337152638 +0000 UTC m=+1119.779119734 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:43:10.837478       1 event.go:291] "Event occurred" object="pvc-1a9103ad-679b-4429-93dd-f322a37a2855" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted"
I0906 20:43:10.837518       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3137
I0906 20:43:10.837547       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase: Failed, bound to: "azuredisk-2546/pvc-b7x95 (uid: 1a9103ad-679b-4429-93dd-f322a37a2855)", boundByController: true
I0906 20:43:10.837580       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is bound to claim azuredisk-2546/pvc-b7x95
I0906 20:43:10.837602       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: claim azuredisk-2546/pvc-b7x95 not found
I0906 20:43:10.837612       1 pv_controller.go:1108] reclaimVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: policy is Delete
I0906 20:43:10.837631       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]]
I0906 20:43:10.837642       1 pv_controller.go:1765] operation "delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]" postponed due to exponential backoff
I0906 20:43:10.837665       1 pv_protection_controller.go:205] Got event on PV pvc-1a9103ad-679b-4429-93dd-f322a37a2855
... skipping 9 lines ...
I0906 20:43:14.935546       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: claim azuredisk-2546/pvc-zcnkp found: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:14.935558       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp]: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:14.935559       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: all is bound
I0906 20:43:14.935568       1 pv_controller.go:858] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: set phase Bound
I0906 20:43:14.935585       1 pv_controller.go:861] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: phase Bound already set
I0906 20:43:14.935598       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3137
I0906 20:43:14.935616       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase: Failed, bound to: "azuredisk-2546/pvc-b7x95 (uid: 1a9103ad-679b-4429-93dd-f322a37a2855)", boundByController: true
I0906 20:43:14.935635       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is bound to claim azuredisk-2546/pvc-b7x95
I0906 20:43:14.935585       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp]: volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" found: phase: Bound, bound to: "azuredisk-2546/pvc-zcnkp (uid: 9db969cb-fefe-45c6-9925-4e52e5c3dd7e)", boundByController: true
I0906 20:43:14.935668       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp]: claim is already correctly bound
I0906 20:43:14.935678       1 pv_controller.go:1012] binding volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" to claim "azuredisk-2546/pvc-zcnkp"
I0906 20:43:14.935687       1 pv_controller.go:910] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: binding to "azuredisk-2546/pvc-zcnkp"
I0906 20:43:14.935720       1 pv_controller.go:922] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: already bound to "azuredisk-2546/pvc-zcnkp"
... skipping 9 lines ...
I0906 20:43:14.935793       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp] status: phase Bound already set
I0906 20:43:14.935803       1 pv_controller.go:1038] volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" bound to claim "azuredisk-2546/pvc-zcnkp"
I0906 20:43:14.935817       1 pv_controller.go:1039] volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-zcnkp (uid: 9db969cb-fefe-45c6-9925-4e52e5c3dd7e)", boundByController: true
I0906 20:43:14.935829       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-zcnkp" status after binding: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:14.941966       1 pv_controller.go:1340] isVolumeReleased[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is released
I0906 20:43:14.941987       1 pv_controller.go:1404] doDeleteVolume [pvc-1a9103ad-679b-4429-93dd-f322a37a2855]
I0906 20:43:14.965022       1 pv_controller.go:1259] deletion of volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:43:14.965045       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: set phase Failed
I0906 20:43:14.965054       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase Failed already set
E0906 20:43:14.965081       1 goroutinemap.go:150] Operation for "delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]" failed. No retries permitted until 2022-09-06 20:43:15.965062885 +0000 UTC m=+1124.407029981 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/virtualMachines/capz-lcwcec-md-0-jzv54), could not be deleted
I0906 20:43:15.810719       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:43:18.899078       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:43:19.916804       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
I0906 20:43:19.916845       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855 to the node "capz-lcwcec-md-0-jzv54" mounted false
I0906 20:43:19.916854       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e to the node "capz-lcwcec-md-0-jzv54" mounted false
I0906 20:43:19.965311       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-jzv54"
... skipping 25 lines ...
I0906 20:43:29.936623       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: volume is bound to claim azuredisk-2546/pvc-zcnkp
I0906 20:43:29.936644       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: claim azuredisk-2546/pvc-zcnkp found: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:29.936663       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: all is bound
I0906 20:43:29.936673       1 pv_controller.go:858] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: set phase Bound
I0906 20:43:29.936685       1 pv_controller.go:861] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: phase Bound already set
I0906 20:43:29.936700       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3137
I0906 20:43:29.936721       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase: Failed, bound to: "azuredisk-2546/pvc-b7x95 (uid: 1a9103ad-679b-4429-93dd-f322a37a2855)", boundByController: true
I0906 20:43:29.936745       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is bound to claim azuredisk-2546/pvc-b7x95
I0906 20:43:29.936765       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: claim azuredisk-2546/pvc-b7x95 not found
I0906 20:43:29.936773       1 pv_controller.go:1108] reclaimVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: policy is Delete
I0906 20:43:29.936791       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]]
I0906 20:43:29.936822       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a9103ad-679b-4429-93dd-f322a37a2855] started
I0906 20:43:29.937155       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp]: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
... skipping 10 lines ...
I0906 20:43:29.938670       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-zcnkp] status: phase Bound already set
I0906 20:43:29.938799       1 pv_controller.go:1038] volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" bound to claim "azuredisk-2546/pvc-zcnkp"
I0906 20:43:29.938940       1 pv_controller.go:1039] volume "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-zcnkp (uid: 9db969cb-fefe-45c6-9925-4e52e5c3dd7e)", boundByController: true
I0906 20:43:29.939073       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-zcnkp" status after binding: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:29.945053       1 pv_controller.go:1340] isVolumeReleased[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is released
I0906 20:43:29.945305       1 pv_controller.go:1404] doDeleteVolume [pvc-1a9103ad-679b-4429-93dd-f322a37a2855]
I0906 20:43:29.945483       1 pv_controller.go:1259] deletion of volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) since it's in attaching or detaching state
I0906 20:43:29.945645       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: set phase Failed
I0906 20:43:29.945795       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase Failed already set
E0906 20:43:29.945915       1 goroutinemap.go:150] Operation for "delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]" failed. No retries permitted until 2022-09-06 20:43:31.945856973 +0000 UTC m=+1140.387824069 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) since it's in attaching or detaching state
I0906 20:43:30.348348       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0906 20:43:33.385508       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36904" resp=200
I0906 20:43:33.733829       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 11 items received
I0906 20:43:35.513502       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) returned with <nil>
I0906 20:43:35.513550       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855) succeeded
I0906 20:43:35.513564       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855 was detached from node:capz-lcwcec-md-0-jzv54
... skipping 12 lines ...
I0906 20:43:44.936706       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: volume is bound to claim azuredisk-2546/pvc-zcnkp
I0906 20:43:44.936748       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: claim azuredisk-2546/pvc-zcnkp found: phase: Bound, bound to: "pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e", bindCompleted: true, boundByController: true
I0906 20:43:44.936762       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: all is bound
I0906 20:43:44.936771       1 pv_controller.go:858] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: set phase Bound
I0906 20:43:44.936780       1 pv_controller.go:861] updating PersistentVolume[pvc-9db969cb-fefe-45c6-9925-4e52e5c3dd7e]: phase Bound already set
I0906 20:43:44.936817       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3137
I0906 20:43:44.936842       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase: Failed, bound to: "azuredisk-2546/pvc-b7x95 (uid: 1a9103ad-679b-4429-93dd-f322a37a2855)", boundByController: true
I0906 20:43:44.936890       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is bound to claim azuredisk-2546/pvc-b7x95
I0906 20:43:44.936909       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: claim azuredisk-2546/pvc-b7x95 not found
I0906 20:43:44.936917       1 pv_controller.go:1108] reclaimVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: policy is Delete
I0906 20:43:44.936997       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]]
I0906 20:43:44.937028       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a9103ad-679b-4429-93dd-f322a37a2855] started
I0906 20:43:44.937251       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-zcnkp" with version 3053
... skipping 18 lines ...
I0906 20:43:47.762777       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 72 items received
I0906 20:43:48.750017       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0906 20:43:50.141577       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-1a9103ad-679b-4429-93dd-f322a37a2855
I0906 20:43:50.141717       1 pv_controller.go:1435] volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" deleted
I0906 20:43:50.141789       1 pv_controller.go:1283] deleteVolumeOperation [pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: success
I0906 20:43:50.149501       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a9103ad-679b-4429-93dd-f322a37a2855" with version 3196
I0906 20:43:50.149544       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: phase: Failed, bound to: "azuredisk-2546/pvc-b7x95 (uid: 1a9103ad-679b-4429-93dd-f322a37a2855)", boundByController: true
I0906 20:43:50.149802       1 pv_protection_controller.go:205] Got event on PV pvc-1a9103ad-679b-4429-93dd-f322a37a2855
I0906 20:43:50.149829       1 pv_protection_controller.go:125] Processing PV pvc-1a9103ad-679b-4429-93dd-f322a37a2855
I0906 20:43:50.150291       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: volume is bound to claim azuredisk-2546/pvc-b7x95
I0906 20:43:50.150319       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: claim azuredisk-2546/pvc-b7x95 not found
I0906 20:43:50.150449       1 pv_controller.go:1108] reclaimVolume[pvc-1a9103ad-679b-4429-93dd-f322a37a2855]: policy is Delete
I0906 20:43:50.150472       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a9103ad-679b-4429-93dd-f322a37a2855[c8a83ab2-f116-45b3-9345-1f64358ddbdf]]
... skipping 354 lines ...
I0906 20:44:06.603741       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2546, name kube-root-ca.crt, uid 7cab35cc-f2c9-47a3-a8e5-4920da62cd61, event type delete
I0906 20:44:06.608260       1 publisher.go:186] Finished syncing namespace "azuredisk-2546" (4.741905ms)
I0906 20:44:06.609284       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-6121fafc-fc15-4968-96bc-27342bbd0068" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6121fafc-fc15-4968-96bc-27342bbd0068") from node "capz-lcwcec-md-0-jzv54" 
I0906 20:44:06.609360       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2") from node "capz-lcwcec-md-0-jzv54" 
I0906 20:44:06.609440       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051") from node "capz-lcwcec-md-0-jzv54" 
I0906 20:44:06.612288       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2546, name default-token-pkb8n, uid df12bd87-646f-48d3-a8b0-b28820334c24, event type delete
E0906 20:44:06.627429       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2546/default: secrets "default-token-d8vms" is forbidden: unable to create new content in namespace azuredisk-2546 because it is being terminated
I0906 20:44:06.657553       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-b8rww.1712602f650f9be0, uid 09279b5c-2e20-4cb8-a3c7-0eea9d8ace0f, event type delete
I0906 20:44:06.663268       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-b8rww.17126030bd321d92, uid 14806194-9443-485b-ba5a-9e1f45ef9ea9, event type delete
I0906 20:44:06.664852       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" to node "capz-lcwcec-md-0-jzv54".
I0906 20:44:06.665017       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-6121fafc-fc15-4968-96bc-27342bbd0068. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6121fafc-fc15-4968-96bc-27342bbd0068" to node "capz-lcwcec-md-0-jzv54".
I0906 20:44:06.665215       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" to node "capz-lcwcec-md-0-jzv54".
I0906 20:44:06.667826       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-b8rww.171260312f6a34c0, uid 72d9b05b-722c-4487-b8e7-dc9b78211a31, event type delete
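(Note, not part of the log: each of the three AttachVolume operations above first reports "GetDiskLun returned: cannot find Lun for disk ...", i.e. the disk is not yet listed among the VM's data disks, so the attach is initiated and a LUN gets assigned. Purely as an illustration, not something the test run executes, the resulting LUN assignments on the worker VM named in the log could be inspected with a query like the following; the JMESPath expression is an assumption of this sketch:)

  # Illustration only: list the data disks and their LUNs on the worker VM from the log.
  az vm show --resource-group capz-lcwcec --name capz-lcwcec-md-0-jzv54 \
    --query "storageProfile.dataDisks[].{disk: name, lun: lun}" -o table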
... skipping 349 lines ...
I0906 20:44:40.418617       1 pv_controller.go:1108] reclaimVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: policy is Delete
I0906 20:44:40.418691       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]]
I0906 20:44:40.418703       1 pv_controller.go:1763] operation "delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]" is already running, skipping
I0906 20:44:40.418597       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051] started
I0906 20:44:40.431683       1 pv_controller.go:1340] isVolumeReleased[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is released
I0906 20:44:40.431796       1 pv_controller.go:1404] doDeleteVolume [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]
I0906 20:44:40.431905       1 pv_controller.go:1259] deletion of volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state
I0906 20:44:40.432005       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: set phase Failed
I0906 20:44:40.432084       1 pv_controller.go:858] updating PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: set phase Failed
I0906 20:44:40.436067       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" with version 3384
I0906 20:44:40.436096       1 pv_controller.go:879] volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" entered phase "Failed"
I0906 20:44:40.436106       1 pv_controller.go:901] volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state
E0906 20:44:40.436144       1 goroutinemap.go:150] Operation for "delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]" failed. No retries permitted until 2022-09-06 20:44:40.93612548 +0000 UTC m=+1209.378092576 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state
I0906 20:44:40.436422       1 event.go:291] "Event occurred" object="pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state"
I0906 20:44:40.436670       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" with version 3384
I0906 20:44:40.436767       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: phase: Failed, bound to: "azuredisk-8582/pvc-j8g6j (uid: 4d03ffc3-437e-4c73-8352-46c2f5f42051)", boundByController: true
I0906 20:44:40.436857       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is bound to claim azuredisk-8582/pvc-j8g6j
I0906 20:44:40.436996       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: claim azuredisk-8582/pvc-j8g6j not found
I0906 20:44:40.437133       1 pv_controller.go:1108] reclaimVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: policy is Delete
I0906 20:44:40.437269       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]]
I0906 20:44:40.437413       1 pv_controller.go:1765] operation "delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]" postponed due to exponential backoff
I0906 20:44:40.437572       1 pv_protection_controller.go:205] Got event on PV pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051
... skipping 54 lines ...
I0906 20:44:44.939837       1 pv_controller.go:858] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: set phase Bound
I0906 20:44:44.939845       1 pv_controller.go:861] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: phase Bound already set
I0906 20:44:44.939846       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-sv5c5] status: phase Bound already set
I0906 20:44:44.939857       1 pv_controller.go:1038] volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" bound to claim "azuredisk-8582/pvc-sv5c5"
I0906 20:44:44.939866       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" with version 3384
I0906 20:44:44.939873       1 pv_controller.go:1039] volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-sv5c5 (uid: 806a4baf-0448-43f6-8a26-81d2df6dfdd2)", boundByController: true
I0906 20:44:44.939883       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: phase: Failed, bound to: "azuredisk-8582/pvc-j8g6j (uid: 4d03ffc3-437e-4c73-8352-46c2f5f42051)", boundByController: true
I0906 20:44:44.939887       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-sv5c5" status after binding: phase: Bound, bound to: "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2", bindCompleted: true, boundByController: true
I0906 20:44:44.939901       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is bound to claim azuredisk-8582/pvc-j8g6j
I0906 20:44:44.939916       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: claim azuredisk-8582/pvc-j8g6j not found
I0906 20:44:44.939922       1 pv_controller.go:1108] reclaimVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: policy is Delete
I0906 20:44:44.939936       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]]
I0906 20:44:44.939966       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051] started
I0906 20:44:44.947833       1 pv_controller.go:1340] isVolumeReleased[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is released
I0906 20:44:44.947853       1 pv_controller.go:1404] doDeleteVolume [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]
I0906 20:44:44.947885       1 pv_controller.go:1259] deletion of volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state
I0906 20:44:44.947900       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: set phase Failed
I0906 20:44:44.947909       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: phase Failed already set
E0906 20:44:44.947941       1 goroutinemap.go:150] Operation for "delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]" failed. No retries permitted until 2022-09-06 20:44:45.947923718 +0000 UTC m=+1214.389890914 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) since it's in attaching or detaching state
I0906 20:44:45.009589       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0906 20:44:45.871900       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:44:50.225343       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0906 20:44:51.700521       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:44:53.385936       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="94.302µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56280" resp=200
I0906 20:44:55.557851       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051) returned with <nil>
... skipping 48 lines ...
I0906 20:44:59.939917       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: volume is bound to claim azuredisk-8582/pvc-4cmcw
I0906 20:44:59.939932       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: claim azuredisk-8582/pvc-4cmcw found: phase: Bound, bound to: "pvc-6121fafc-fc15-4968-96bc-27342bbd0068", bindCompleted: true, boundByController: true
I0906 20:44:59.939969       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: all is bound
I0906 20:44:59.939981       1 pv_controller.go:858] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: set phase Bound
I0906 20:44:59.939990       1 pv_controller.go:861] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: phase Bound already set
I0906 20:44:59.940002       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" with version 3384
I0906 20:44:59.940041       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: phase: Failed, bound to: "azuredisk-8582/pvc-j8g6j (uid: 4d03ffc3-437e-4c73-8352-46c2f5f42051)", boundByController: true
I0906 20:44:59.940066       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is bound to claim azuredisk-8582/pvc-j8g6j
I0906 20:44:59.940089       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: claim azuredisk-8582/pvc-j8g6j not found
I0906 20:44:59.940101       1 pv_controller.go:1108] reclaimVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: policy is Delete
I0906 20:44:59.940134       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]]
I0906 20:44:59.940180       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051] started
I0906 20:44:59.945937       1 pv_controller.go:1340] isVolumeReleased[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is released
... skipping 4 lines ...
I0906 20:45:07.156809       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-lcwcec-md-0-mcztr"
I0906 20:45:10.025629       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-md-0-mcztr ReadyCondition updated. Updating timestamp.
I0906 20:45:10.158275       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051
I0906 20:45:10.158307       1 pv_controller.go:1435] volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" deleted
I0906 20:45:10.158318       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: success
I0906 20:45:10.167685       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051" with version 3428
I0906 20:45:10.167737       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: phase: Failed, bound to: "azuredisk-8582/pvc-j8g6j (uid: 4d03ffc3-437e-4c73-8352-46c2f5f42051)", boundByController: true
I0906 20:45:10.167768       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: volume is bound to claim azuredisk-8582/pvc-j8g6j
I0906 20:45:10.167786       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: claim azuredisk-8582/pvc-j8g6j not found
I0906 20:45:10.167875       1 pv_controller.go:1108] reclaimVolume[pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051]: policy is Delete
I0906 20:45:10.167942       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051[062c4f6f-860f-4f1a-a1fa-6d855cca75f7]]
I0906 20:45:10.167692       1 pv_protection_controller.go:205] Got event on PV pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051
I0906 20:45:10.168023       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4d03ffc3-437e-4c73-8352-46c2f5f42051] started
... skipping 47 lines ...
I0906 20:45:10.853204       1 pv_controller.go:1108] reclaimVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: policy is Delete
I0906 20:45:10.853245       1 pv_controller.go:1752] scheduleOperation[delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]]
I0906 20:45:10.853303       1 pv_controller.go:1763] operation "delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]" is already running, skipping
I0906 20:45:10.853407       1 pv_controller.go:1231] deleteVolumeOperation [pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2] started
I0906 20:45:10.855114       1 pv_controller.go:1340] isVolumeReleased[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is released
I0906 20:45:10.855130       1 pv_controller.go:1404] doDeleteVolume [pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]
I0906 20:45:10.855177       1 pv_controller.go:1259] deletion of volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state
I0906 20:45:10.855198       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: set phase Failed
I0906 20:45:10.855207       1 pv_controller.go:858] updating PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: set phase Failed
I0906 20:45:10.858333       1 pv_protection_controller.go:205] Got event on PV pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2
I0906 20:45:10.858334       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" with version 3435
I0906 20:45:10.858623       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: phase: Failed, bound to: "azuredisk-8582/pvc-sv5c5 (uid: 806a4baf-0448-43f6-8a26-81d2df6dfdd2)", boundByController: true
I0906 20:45:10.858796       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is bound to claim azuredisk-8582/pvc-sv5c5
I0906 20:45:10.858850       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: claim azuredisk-8582/pvc-sv5c5 not found
I0906 20:45:10.858864       1 pv_controller.go:1108] reclaimVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: policy is Delete
I0906 20:45:10.858876       1 pv_controller.go:1752] scheduleOperation[delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]]
I0906 20:45:10.858888       1 pv_controller.go:1763] operation "delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]" is already running, skipping
I0906 20:45:10.858435       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" with version 3435
I0906 20:45:10.858960       1 pv_controller.go:879] volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" entered phase "Failed"
I0906 20:45:10.858972       1 pv_controller.go:901] volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state
E0906 20:45:10.859163       1 goroutinemap.go:150] Operation for "delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]" failed. No retries permitted until 2022-09-06 20:45:11.359146669 +0000 UTC m=+1239.801113765 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state
I0906 20:45:10.859464       1 event.go:291] "Event occurred" object="pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state"
I0906 20:45:13.385850       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.302µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44324" resp=200
I0906 20:45:13.858405       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 27 items received
I0906 20:45:14.043114       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0906 20:45:14.756117       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:45:14.834937       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:45:14.939149       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:45:14.939371       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" with version 3435
I0906 20:45:14.939550       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: phase: Failed, bound to: "azuredisk-8582/pvc-sv5c5 (uid: 806a4baf-0448-43f6-8a26-81d2df6dfdd2)", boundByController: true
I0906 20:45:14.939606       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is bound to claim azuredisk-8582/pvc-sv5c5
I0906 20:45:14.939641       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: claim azuredisk-8582/pvc-sv5c5 not found
I0906 20:45:14.939650       1 pv_controller.go:1108] reclaimVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: policy is Delete
I0906 20:45:14.939696       1 pv_controller.go:1752] scheduleOperation[delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]]
I0906 20:45:14.939408       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-4cmcw" with version 3270
I0906 20:45:14.939773       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-4cmcw]: phase: Bound, bound to: "pvc-6121fafc-fc15-4968-96bc-27342bbd0068", bindCompleted: true, boundByController: true
... skipping 18 lines ...
I0906 20:45:14.943245       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: claim azuredisk-8582/pvc-4cmcw found: phase: Bound, bound to: "pvc-6121fafc-fc15-4968-96bc-27342bbd0068", bindCompleted: true, boundByController: true
I0906 20:45:14.943368       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: all is bound
I0906 20:45:14.943502       1 pv_controller.go:858] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: set phase Bound
I0906 20:45:14.943620       1 pv_controller.go:861] updating PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: phase Bound already set
I0906 20:45:14.944801       1 pv_controller.go:1340] isVolumeReleased[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is released
I0906 20:45:14.944819       1 pv_controller.go:1404] doDeleteVolume [pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]
I0906 20:45:14.944899       1 pv_controller.go:1259] deletion of volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state
I0906 20:45:14.944913       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: set phase Failed
I0906 20:45:14.944961       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: phase Failed already set
E0906 20:45:14.944992       1 goroutinemap.go:150] Operation for "delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]" failed. No retries permitted until 2022-09-06 20:45:15.944971409 +0000 UTC m=+1244.386938505 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) since it's in attaching or detaching state
I0906 20:45:15.890676       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0906 20:45:16.105851       1 azure_controller_standard.go:184] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) returned with <nil>
I0906 20:45:16.105926       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2) succeeded
I0906 20:45:16.105938       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2 was detached from node:capz-lcwcec-md-0-jzv54
I0906 20:45:16.105962       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2") on node "capz-lcwcec-md-0-jzv54" 
I0906 20:45:16.139895       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6121fafc-fc15-4968-96bc-27342bbd0068"
... skipping 17 lines ...
I0906 20:45:29.940056       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-4cmcw] status: set phase Bound
I0906 20:45:29.940081       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-4cmcw] status: phase Bound already set
I0906 20:45:29.940099       1 pv_controller.go:1038] volume "pvc-6121fafc-fc15-4968-96bc-27342bbd0068" bound to claim "azuredisk-8582/pvc-4cmcw"
I0906 20:45:29.940125       1 pv_controller.go:1039] volume "pvc-6121fafc-fc15-4968-96bc-27342bbd0068" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-4cmcw (uid: 6121fafc-fc15-4968-96bc-27342bbd0068)", boundByController: true
I0906 20:45:29.940143       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-4cmcw" status after binding: phase: Bound, bound to: "pvc-6121fafc-fc15-4968-96bc-27342bbd0068", bindCompleted: true, boundByController: true
I0906 20:45:29.940185       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" with version 3435
I0906 20:45:29.940237       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: phase: Failed, bound to: "azuredisk-8582/pvc-sv5c5 (uid: 806a4baf-0448-43f6-8a26-81d2df6dfdd2)", boundByController: true
I0906 20:45:29.940268       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is bound to claim azuredisk-8582/pvc-sv5c5
I0906 20:45:29.940294       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: claim azuredisk-8582/pvc-sv5c5 not found
I0906 20:45:29.940308       1 pv_controller.go:1108] reclaimVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: policy is Delete
I0906 20:45:29.940323       1 pv_controller.go:1752] scheduleOperation[delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]]
I0906 20:45:29.940357       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6121fafc-fc15-4968-96bc-27342bbd0068" with version 3266
I0906 20:45:29.940381       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6121fafc-fc15-4968-96bc-27342bbd0068]: phase: Bound, bound to: "azuredisk-8582/pvc-4cmcw (uid: 6121fafc-fc15-4968-96bc-27342bbd0068)", boundByController: true
... skipping 11 lines ...
I0906 20:45:31.665622       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-6121fafc-fc15-4968-96bc-27342bbd0068" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-6121fafc-fc15-4968-96bc-27342bbd0068") on node "capz-lcwcec-md-0-jzv54" 
I0906 20:45:33.385304       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.802µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60680" resp=200
I0906 20:45:35.143335       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2
I0906 20:45:35.143371       1 pv_controller.go:1435] volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" deleted
I0906 20:45:35.143382       1 pv_controller.go:1283] deleteVolumeOperation [pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: success
I0906 20:45:35.152259       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2" with version 3470
I0906 20:45:35.152493       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: phase: Failed, bound to: "azuredisk-8582/pvc-sv5c5 (uid: 806a4baf-0448-43f6-8a26-81d2df6dfdd2)", boundByController: true
I0906 20:45:35.152670       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: volume is bound to claim azuredisk-8582/pvc-sv5c5
I0906 20:45:35.152845       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: claim azuredisk-8582/pvc-sv5c5 not found
I0906 20:45:35.152955       1 pv_controller.go:1108] reclaimVolume[pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2]: policy is Delete
I0906 20:45:35.153109       1 pv_controller.go:1752] scheduleOperation[delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]]
I0906 20:45:35.153205       1 pv_controller.go:1763] operation "delete-pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2[b20b03fe-228c-416d-af5e-e71b7e35103e]" is already running, skipping
I0906 20:45:35.153304       1 pv_protection_controller.go:205] Got event on PV pvc-806a4baf-0448-43f6-8a26-81d2df6dfdd2
... skipping 123 lines ...
I0906 20:45:49.362349       1 pv_controller.go:1763] operation "provision-azuredisk-7051/pvc-n2dmv[35531e56-6897-46c9-b502-e7999449ded1]" is already running, skipping
I0906 20:45:49.368163       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1 StorageAccountType:Standard_LRS Size:10
I0906 20:45:50.031708       1 node_lifecycle_controller.go:1047] Node capz-lcwcec-control-plane-jfxgf ReadyCondition updated. Updating timestamp.
I0906 20:45:51.499657       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0906 20:45:51.540520       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-2bkgf, uid a9fb7dfa-1258-4b8f-8645-00c5e5673c3e, event type delete
I0906 20:45:51.553641       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.171260416b5c32b3, uid 86878178-e1b6-4e77-a2b7-afb9eae0f10a, event type delete
E0906 20:45:51.554687       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-fqhs8" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0906 20:45:51.559520       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.17126042c3b56f6c, uid 74d8cccc-b5af-472e-995f-b2b8ff221cad, event type delete
I0906 20:45:51.562940       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.171260453b2c4be7, uid a2220024-a56a-4c69-991d-ecfb2226ee5f, event type delete
I0906 20:45:51.565742       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.17126047b1c7c642, uid 1d934297-a6aa-4f91-b0e7-bef2272254d0, event type delete
I0906 20:45:51.568787       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.17126048115e1c60, uid b7e78d65-9f45-44a2-b9a3-ce4e2aad175f, event type delete
I0906 20:45:51.572384       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.17126048149d6704, uid 087aab95-c500-4d6a-9eb7-91a4bfb65016, event type delete
I0906 20:45:51.575763       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-2kmvr.171260481c170d1b, uid 8c7af9a0-e01b-47a2-9d53-17e9b4bf292d, event type delete
... skipping 67 lines ...
I0906 20:45:51.706794       1 pv_controller.go:1040] claim "azuredisk-7051/pvc-n2dmv" status after binding: phase: Bound, bound to: "pvc-35531e56-6897-46c9-b502-e7999449ded1", bindCompleted: true, boundByController: true
I0906 20:45:51.719116       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8582, estimate: 0, errors: <nil>
I0906 20:45:51.719485       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (2.5µs)
I0906 20:45:51.729168       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (232.806274ms)
I0906 20:45:52.065845       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0906 20:45:52.120609       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-6lnz6, uid 2db05b43-ee2b-4584-8dcf-c167acec229e, event type delete
E0906 20:45:52.155974       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-8pxzl" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0906 20:45:52.207077       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 29225db4-7686-4aa7-9363-a51ce276e531, event type delete
I0906 20:45:52.210581       1 publisher.go:186] Finished syncing namespace "azuredisk-7726" (3.748883ms)
I0906 20:45:52.254797       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0906 20:45:52.255086       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.4µs)
I0906 20:45:52.255257       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 2a407ae9-7998-48dc-a330-7809de52db4c, event type delete
I0906 20:45:52.268623       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
... skipping 9 lines ...
I0906 20:45:52.450055       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-35531e56-6897-46c9-b502-e7999449ded1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1") from node "capz-lcwcec-md-0-jzv54" 
I0906 20:45:52.494756       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1" to node "capz-lcwcec-md-0-jzv54".
I0906 20:45:52.549524       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1" lun 0 to node "capz-lcwcec-md-0-jzv54".
I0906 20:45:52.549569       1 azure_controller_standard.go:93] azureDisk - update(capz-lcwcec): vm(capz-lcwcec-md-0-jzv54) - attach disk(capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-35531e56-6897-46c9-b502-e7999449ded1) with DiskEncryptionSetID()
I0906 20:45:52.658778       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3086
I0906 20:45:52.682691       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-bdq8b, uid 31335f4d-8b73-4ac9-b239-a0b91fd18aad, event type delete
E0906 20:45:52.702658       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-g6kwh" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0906 20:45:52.739421       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid 77dd8dd2-b0a5-49fa-8608-52ba36ac6226, event type delete
I0906 20:45:52.742938       1 publisher.go:186] Finished syncing namespace "azuredisk-3086" (3.735483ms)
I0906 20:45:52.752136       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0906 20:45:52.752247       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.1µs)
I0906 20:45:52.752319       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 29aec440-801b-4162-8bb5-08d1c960e510, event type delete
I0906 20:45:52.807297       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.4µs)
I0906 20:45:52.808792       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0906 20:45:52.822265       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (166.265296ms)
I0906 20:45:53.244878       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0906 20:45:53.270148       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-5t8dq, uid 7381d0d6-c8f6-44c5-aa92-67d6df6e88ee, event type delete
E0906 20:45:53.284063       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-6vg4w" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0906 20:45:53.315479       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0906 20:45:53.315726       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (3.6µs)
I0906 20:45:53.315803       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid a5715d3e-ca61-4180-bdb5-ea15583e7f83, event type delete
I0906 20:45:53.331620       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid f3496d65-18e5-446f-bf16-33aa4af51575, event type delete
I0906 20:45:53.334023       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (2.439454ms)
I0906 20:45:53.379246       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (3.2µs)
... skipping 727 lines ...
I0906 20:48:13.769895       1 stateful_set_control.go:451] StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 is waiting for Pod azuredisk-volume-tester-qxzr5-0 to be Running and Ready
I0906 20:48:13.769906       1 stateful_set_control.go:112] StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 pod status replicas=1 ready=0 current=1 updated=1
I0906 20:48:13.769914       1 stateful_set_control.go:120] StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 revisions current=azuredisk-volume-tester-qxzr5-7dc4c9c847 update=azuredisk-volume-tester-qxzr5-7dc4c9c847
I0906 20:48:13.769922       1 stateful_set.go:477] Successfully synced StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 successful
I0906 20:48:13.769930       1 stateful_set.go:431] Finished syncing statefulset "azuredisk-9183/azuredisk-volume-tester-qxzr5" (1.885741ms)
I0906 20:48:13.771943       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5906ab61-eca0-4051-8477-0a81e125f20a on node "capz-lcwcec-md-0-jzv54"
W0906 20:48:13.772186       1 reconciler.go:344] Multi-Attach error for volume "pvc-5906ab61-eca0-4051-8477-0a81e125f20a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-lcwcec/providers/Microsoft.Compute/disks/capz-lcwcec-dynamic-pvc-5906ab61-eca0-4051-8477-0a81e125f20a") from node "capz-lcwcec-md-0-mcztr" Volume is already exclusively attached to node capz-lcwcec-md-0-jzv54 and can't be attached to another
I0906 20:48:13.772598       1 event.go:291] "Event occurred" object="azuredisk-9183/azuredisk-volume-tester-qxzr5-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-5906ab61-eca0-4051-8477-0a81e125f20a\" Volume is already exclusively attached to one node and can't be attached to another"
I0906 20:48:13.782846       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-qxzr5-0"
I0906 20:48:13.783055       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-qxzr5-0, PodDisruptionBudget controller will avoid syncing.
I0906 20:48:13.783069       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-qxzr5-0"
I0906 20:48:13.782725       1 stateful_set.go:224] Pod azuredisk-volume-tester-qxzr5-0 updated, objectMeta {Name:azuredisk-volume-tester-qxzr5-0 GenerateName:azuredisk-volume-tester-qxzr5- Namespace:azuredisk-9183 SelfLink: UID:57c35fcf-c7dc-4d49-8c90-5a80951aba56 ResourceVersion:3875 Generation:0 CreationTimestamp:2022-09-06 20:48:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-907430288210826867 controller-revision-hash:azuredisk-volume-tester-qxzr5-7dc4c9c847 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-qxzr5-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-qxzr5 UID:fd88ae09-ea57-4ff6-8f3b-24305dd2423f Controller:0xc000ad079e BlockOwnerDeletion:0xc000ad079f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:48:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd88ae09-ea57-4ff6-8f3b-24305dd2423f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-qxzr5-0 GenerateName:azuredisk-volume-tester-qxzr5- Namespace:azuredisk-9183 SelfLink: UID:57c35fcf-c7dc-4d49-8c90-5a80951aba56 ResourceVersion:3879 Generation:0 CreationTimestamp:2022-09-06 20:48:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-907430288210826867 controller-revision-hash:azuredisk-volume-tester-qxzr5-7dc4c9c847 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-qxzr5-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-qxzr5 UID:fd88ae09-ea57-4ff6-8f3b-24305dd2423f Controller:0xc000a85ce7 BlockOwnerDeletion:0xc000a85ce8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:48:13 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fd88ae09-ea57-4ff6-8f3b-24305dd2423f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 20:48:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0906 20:48:13.783135       1 stateful_set.go:469] Syncing StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 with 1 pods
I0906 20:48:13.784200       1 stateful_set_control.go:376] StatefulSet azuredisk-9183/azuredisk-volume-tester-qxzr5 has 1 unhealthy Pods starting with azuredisk-volume-tester-qxzr5-0
... skipping 239 lines ...
I0906 20:48:50.746297       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8154
2022/09/06 20:48:50 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1216.207 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-v4bvl, container manager
STEP: Dumping workload cluster default/capz-lcwcec logs
Sep  6 20:50:17.266: INFO: Collecting logs for Linux node capz-lcwcec-control-plane-jfxgf in cluster capz-lcwcec in namespace default

Sep  6 20:51:17.267: INFO: Collecting boot logs for AzureMachine capz-lcwcec-control-plane-jfxgf

Failed to get logs for machine capz-lcwcec-control-plane-snvtv, cluster default/capz-lcwcec: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  6 20:51:18.227: INFO: Collecting logs for Linux node capz-lcwcec-md-0-mcztr in cluster capz-lcwcec in namespace default

Sep  6 20:52:18.230: INFO: Collecting boot logs for AzureMachine capz-lcwcec-md-0-mcztr

Failed to get logs for machine capz-lcwcec-md-0-7cfd8d8f4-n7lmv, cluster default/capz-lcwcec: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  6 20:52:18.574: INFO: Collecting logs for Linux node capz-lcwcec-md-0-jzv54 in cluster capz-lcwcec in namespace default

Sep  6 20:53:18.575: INFO: Collecting boot logs for AzureMachine capz-lcwcec-md-0-jzv54

Failed to get logs for machine capz-lcwcec-md-0-7cfd8d8f4-nkqjf, cluster default/capz-lcwcec: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-lcwcec kube-system pod logs
STEP: Fetching kube-system pod logs took 465.796919ms
STEP: Dumping workload cluster default/capz-lcwcec Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-9ndgr
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-lcwcec-control-plane-jfxgf
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-lcwcec-control-plane-jfxgf
STEP: Creating log watcher for controller kube-system/kube-proxy-9ndgr, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-capz-lcwcec-control-plane-jfxgf"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-lcwcec-control-plane-jfxgf, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-capz-lcwcec-control-plane-jfxgf"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-lcwcec-control-plane-jfxgf, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-proxy-dzx68, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-lzq7q
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-gfmqc, container coredns
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-d7c7q, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-h722j, container calico-node
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-proxy-tzkxz, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-gfmqc
STEP: Creating log watcher for controller kube-system/etcd-capz-lcwcec-control-plane-jfxgf, container etcd
STEP: Collecting events for Pod kube-system/calico-node-h722j
STEP: Creating log watcher for controller kube-system/calico-node-lzq7q, container calico-node
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-lcwcec-control-plane-jfxgf
STEP: failed to find events of Pod "kube-scheduler-capz-lcwcec-control-plane-jfxgf"
STEP: Collecting events for Pod kube-system/kube-proxy-tzkxz
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-lcwcec-control-plane-jfxgf, container kube-scheduler
STEP: Collecting events for Pod kube-system/etcd-capz-lcwcec-control-plane-jfxgf
STEP: Creating log watcher for controller kube-system/calico-node-sdhn2, container calico-node
STEP: failed to find events of Pod "etcd-capz-lcwcec-control-plane-jfxgf"
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-qcxs2
STEP: Creating log watcher for controller kube-system/metrics-server-8c95fb79b-qcxs2, container metrics-server
STEP: Collecting events for Pod kube-system/calico-node-sdhn2
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-mkh47, container coredns
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-mkh47
STEP: Fetching activity logs took 5.229524232s
... skipping 17 lines ...