Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-03 20:09
Elapsed: 55m37s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 628 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
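The NotFound error above is expected: hack/create-identity-secret.sh appears to first delete or check for any existing secret (hence the NotFound), then creates and labels it. A rough sketch of an equivalent kubectl sequence follows; the clientSecret key name, the environment variable, and the clusterctl move label are assumptions, not taken from the script itself.

# hypothetical equivalent of hack/create-identity-secret.sh
kubectl delete secret cluster-identity-secret --ignore-not-found
kubectl create secret generic cluster-identity-secret \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"   # key name is an assumption
kubectl label secret cluster-identity-secret \
  clusterctl.cluster.x-k8s.io/move=true                  # label is an assumption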
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-obexd2-kubeconfig; do sleep 1; done"
capz-obexd2-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-obexd2-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-obexd2-control-plane-xp4c2   NotReady   control-plane,master   5s    v1.22.14-rc.0.3+b89409c45e0dcb
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and windows machine(s) to become Ready
node/capz-obexd2-control-plane-xp4c2 condition met
node/capz-obexd2-mp-0000000 condition met
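The two "condition met" lines are the output of waiting for the control-plane and worker nodes to report Ready. A hand-run equivalent against the same kubeconfig would look roughly like the sketch below; the timeout value is an assumption.

kubectl --kubeconfig=./kubeconfig wait node --all \
  --for=condition=Ready --timeout=20m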
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:33:24.727: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mrl7w" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  3 20:33:24.760: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 33.332889ms
Sep  3 20:33:26.793: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066704329s
Sep  3 20:33:28.828: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101804391s
Sep  3 20:33:30.861: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134793228s
Sep  3 20:33:32.894: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167676362s
Sep  3 20:33:34.928: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 10.201503006s
... skipping 2 lines ...
Sep  3 20:33:41.033: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 16.306899755s
Sep  3 20:33:43.069: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 18.34229221s
Sep  3 20:33:45.104: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 20.37794228s
Sep  3 20:33:47.140: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Pending", Reason="", readiness=false. Elapsed: 22.413015068s
Sep  3 20:33:49.175: INFO: Pod "azuredisk-volume-tester-mrl7w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.448299063s
STEP: Saw pod success
Sep  3 20:33:49.175: INFO: Pod "azuredisk-volume-tester-mrl7w" satisfied condition "Succeeded or Failed"
Sep  3 20:33:49.175: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-mrl7w"
Sep  3 20:33:49.223: INFO: Pod azuredisk-volume-tester-mrl7w has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-mrl7w in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:33:49.327: INFO: deleting PVC "azuredisk-8081"/"pvc-4sb79"
Sep  3 20:33:49.327: INFO: Deleting PersistentVolumeClaim "pvc-4sb79"
STEP: waiting for claim's PV "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" to be deleted
Sep  3 20:33:49.361: INFO: Waiting up to 10m0s for PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a to get deleted
Sep  3 20:33:49.393: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Released (32.434923ms)
Sep  3 20:33:54.430: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (5.069210066s)
Sep  3 20:33:59.466: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (10.104746469s)
Sep  3 20:34:04.500: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (15.139606601s)
Sep  3 20:34:09.545: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (20.184475065s)
Sep  3 20:34:14.580: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (25.219667363s)
Sep  3 20:34:19.613: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (30.252282014s)
Sep  3 20:34:24.649: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a found and phase=Failed (35.288388165s)
Sep  3 20:34:29.687: INFO: PersistentVolume pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a was removed
Sep  3 20:34:29.687: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  3 20:34:29.724: INFO: Claim "azuredisk-8081" in namespace "pvc-4sb79" doesn't exist in the system
Sep  3 20:34:29.724: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-q54g2
Sep  3 20:34:29.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
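The "Waiting up to 15m0s for pod ... to be 'Succeeded or Failed'" lines above come from the test framework polling the pod phase roughly every two seconds until it terminates. A shell approximation of that polling loop (illustrative only; the pod and namespace names are taken from this test case, the interval is an assumption):

ns=azuredisk-8081
pod=azuredisk-volume-tester-mrl7w
while true; do
  phase=$(kubectl --kubeconfig=./kubeconfig -n "$ns" get pod "$pod" -o jsonpath='{.status.phase}')
  echo "phase=$phase"
  case "$phase" in
    Succeeded) break ;;
    Failed)    exit 1 ;;
  esac
  sleep 2
done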
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  3 20:34:49.781: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-jsb4l"
Sep  3 20:34:49.835: INFO: Error getting logs for pod azuredisk-volume-tester-jsb4l: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-jsb4l)
STEP: Deleting pod azuredisk-volume-tester-jsb4l in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:34:49.947: INFO: deleting PVC "azuredisk-5466"/"pvc-bq8q8"
Sep  3 20:34:49.947: INFO: Deleting PersistentVolumeClaim "pvc-bq8q8"
STEP: waiting for claim's PV "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" to be deleted
... skipping 17 lines ...
Sep  3 20:36:10.663: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m20.676261898s)
Sep  3 20:36:15.705: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m25.718034722s)
Sep  3 20:36:20.743: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m30.756272777s)
Sep  3 20:36:25.782: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m35.795318532s)
Sep  3 20:36:30.821: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m40.834404846s)
Sep  3 20:36:35.858: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Bound (1m45.871494989s)
Sep  3 20:36:40.898: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (1m50.911017234s)
Sep  3 20:36:45.936: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (1m55.948917389s)
Sep  3 20:36:50.977: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (2m0.989754934s)
Sep  3 20:36:56.018: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (2m6.031367598s)
Sep  3 20:37:01.056: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (2m11.069163223s)
Sep  3 20:37:06.098: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 found and phase=Failed (2m16.110743009s)
Sep  3 20:37:11.135: INFO: PersistentVolume pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 was removed
Sep  3 20:37:11.135: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  3 20:37:11.172: INFO: Claim "azuredisk-5466" in namespace "pvc-bq8q8" doesn't exist in the system
Sep  3 20:37:11.172: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-p8xsj
Sep  3 20:37:11.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:37:11.950: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pp72q" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  3 20:37:11.993: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 42.937779ms
Sep  3 20:37:14.031: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081237345s
Sep  3 20:37:16.069: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119439429s
Sep  3 20:37:18.107: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.157153691s
Sep  3 20:37:20.146: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.196358603s
Sep  3 20:37:22.184: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.234781012s
Sep  3 20:37:24.223: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 12.273022939s
Sep  3 20:37:26.261: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 14.311296091s
Sep  3 20:37:28.300: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Pending", Reason="", readiness=false. Elapsed: 16.350688837s
Sep  3 20:37:30.340: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Running", Reason="", readiness=false. Elapsed: 18.390203089s
Sep  3 20:37:32.380: INFO: Pod "azuredisk-volume-tester-pp72q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.430086548s
STEP: Saw pod success
Sep  3 20:37:32.380: INFO: Pod "azuredisk-volume-tester-pp72q" satisfied condition "Succeeded or Failed"
Sep  3 20:37:32.380: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-pp72q"
Sep  3 20:37:32.429: INFO: Pod azuredisk-volume-tester-pp72q has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-pp72q in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:37:32.551: INFO: deleting PVC "azuredisk-2790"/"pvc-nwgnb"
Sep  3 20:37:32.551: INFO: Deleting PersistentVolumeClaim "pvc-nwgnb"
STEP: waiting for claim's PV "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" to be deleted
Sep  3 20:37:32.589: INFO: Waiting up to 10m0s for PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc to get deleted
Sep  3 20:37:32.632: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Released (43.218458ms)
Sep  3 20:37:37.674: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (5.084745353s)
Sep  3 20:37:42.712: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (10.122459428s)
Sep  3 20:37:47.750: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (15.16042162s)
Sep  3 20:37:52.783: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (20.193966728s)
Sep  3 20:37:57.817: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (25.227674702s)
Sep  3 20:38:02.853: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc found and phase=Failed (30.264149893s)
Sep  3 20:38:07.887: INFO: PersistentVolume pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc was removed
Sep  3 20:38:07.887: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  3 20:38:07.920: INFO: Claim "azuredisk-2790" in namespace "pvc-nwgnb" doesn't exist in the system
Sep  3 20:38:07.920: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-ck82n
Sep  3 20:38:07.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  3 20:38:08.628: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-tfvbt" in namespace "azuredisk-5356" to be "Error status code"
Sep  3 20:38:08.660: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 32.022124ms
Sep  3 20:38:10.695: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066287164s
Sep  3 20:38:12.728: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099415167s
Sep  3 20:38:14.762: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133756066s
Sep  3 20:38:16.796: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167399828s
Sep  3 20:38:18.831: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202500336s
Sep  3 20:38:20.867: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.238319672s
Sep  3 20:38:22.901: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.273141057s
Sep  3 20:38:24.937: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.308791247s
Sep  3 20:38:26.973: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Running", Reason="", readiness=true. Elapsed: 18.344888062s
Sep  3 20:38:29.009: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Running", Reason="", readiness=false. Elapsed: 20.380391808s
Sep  3 20:38:31.045: INFO: Pod "azuredisk-volume-tester-tfvbt": Phase="Failed", Reason="", readiness=false. Elapsed: 22.416684093s
STEP: Saw pod failure
Sep  3 20:38:31.045: INFO: Pod "azuredisk-volume-tester-tfvbt" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 20:38:31.081: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-tfvbt"
Sep  3 20:38:31.116: INFO: Pod azuredisk-volume-tester-tfvbt has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-tfvbt in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:38:31.224: INFO: deleting PVC "azuredisk-5356"/"pvc-ljjqm"
Sep  3 20:38:31.224: INFO: Deleting PersistentVolumeClaim "pvc-ljjqm"
STEP: waiting for claim's PV "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" to be deleted
Sep  3 20:38:31.257: INFO: Waiting up to 10m0s for PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef to get deleted
Sep  3 20:38:31.290: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Released (33.012789ms)
Sep  3 20:38:36.324: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (5.066223151s)
Sep  3 20:38:41.357: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (10.099966271s)
Sep  3 20:38:46.395: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (15.137224167s)
Sep  3 20:38:51.433: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (20.175646146s)
Sep  3 20:38:56.470: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (25.212335756s)
Sep  3 20:39:01.507: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (30.249916358s)
Sep  3 20:39:06.542: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef found and phase=Failed (35.284669995s)
Sep  3 20:39:11.579: INFO: PersistentVolume pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef was removed
Sep  3 20:39:11.580: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  3 20:39:11.612: INFO: Claim "azuredisk-5356" in namespace "pvc-ljjqm" doesn't exist in the system
Sep  3 20:39:11.612: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-qtqkc
Sep  3 20:39:11.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
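After each PVC is deleted, the test polls until the bound PV object itself disappears, which is what the Released/Failed/removed progression above records. A manual equivalent using kubectl wait, with the PV name taken from this test case and the timeout matching the 10m0s the test allows:

kubectl --kubeconfig=./kubeconfig wait pv/pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef \
  --for=delete --timeout=10m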
... skipping 53 lines ...
Sep  3 20:40:20.075: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (5.067528758s)
Sep  3 20:40:25.110: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (10.101630917s)
Sep  3 20:40:30.146: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (15.137910173s)
Sep  3 20:40:35.181: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (20.173509977s)
Sep  3 20:40:40.217: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (25.208717222s)
Sep  3 20:40:45.253: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Bound (30.244599055s)
Sep  3 20:40:50.290: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (35.281713515s)
Sep  3 20:40:55.323: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (40.315481301s)
Sep  3 20:41:00.357: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (45.349092797s)
Sep  3 20:41:05.394: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (50.386401787s)
Sep  3 20:41:10.430: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (55.422204403s)
Sep  3 20:41:15.465: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (1m0.457342543s)
Sep  3 20:41:20.500: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 found and phase=Failed (1m5.491584477s)
Sep  3 20:41:25.538: INFO: PersistentVolume pvc-072a8398-a5d1-4915-99c2-470203f38b81 was removed
Sep  3 20:41:25.538: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  3 20:41:25.571: INFO: Claim "azuredisk-5194" in namespace "pvc-tfmm6" doesn't exist in the system
Sep  3 20:41:25.571: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-2zkgk
Sep  3 20:41:25.605: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-86nf7"
Sep  3 20:41:25.640: INFO: Pod azuredisk-volume-tester-86nf7 has the following logs: 
... skipping 8 lines ...
Sep  3 20:41:30.845: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (5.069313018s)
Sep  3 20:41:35.882: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (10.105476177s)
Sep  3 20:41:40.919: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (15.142885743s)
Sep  3 20:41:45.952: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (20.175889079s)
Sep  3 20:41:50.990: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (25.213953s)
Sep  3 20:41:56.028: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Bound (30.251700997s)
Sep  3 20:42:01.061: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Failed (35.284891969s)
Sep  3 20:42:06.095: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Failed (40.319067647s)
Sep  3 20:42:11.129: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Failed (45.353015983s)
Sep  3 20:42:16.166: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Failed (50.389380664s)
Sep  3 20:42:21.203: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 found and phase=Failed (55.427261055s)
Sep  3 20:42:26.238: INFO: PersistentVolume pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 was removed
Sep  3 20:42:26.238: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  3 20:42:26.271: INFO: Claim "azuredisk-5194" in namespace "pvc-lgrjn" doesn't exist in the system
Sep  3 20:42:26.271: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-z6kpz
Sep  3 20:42:26.305: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-fzs4g"
Sep  3 20:42:26.360: INFO: Pod azuredisk-volume-tester-fzs4g has the following logs: 
... skipping 8 lines ...
Sep  3 20:42:31.563: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (5.066348696s)
Sep  3 20:42:36.598: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (10.101122056s)
Sep  3 20:42:41.632: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (15.135502221s)
Sep  3 20:42:46.669: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (20.172485133s)
Sep  3 20:42:51.707: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (25.209713s)
Sep  3 20:42:56.740: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Bound (30.242981029s)
Sep  3 20:43:01.777: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Failed (35.280068624s)
Sep  3 20:43:06.813: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Failed (40.316215428s)
Sep  3 20:43:11.850: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Failed (45.353490758s)
Sep  3 20:43:16.886: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Failed (50.388812157s)
Sep  3 20:43:21.923: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db found and phase=Failed (55.426392602s)
Sep  3 20:43:26.958: INFO: PersistentVolume pvc-e1990edf-8a88-417c-81c0-224719e387db was removed
Sep  3 20:43:26.958: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  3 20:43:26.991: INFO: Claim "azuredisk-5194" in namespace "pvc-qcpdx" doesn't exist in the system
Sep  3 20:43:26.991: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-nf5s4
Sep  3 20:43:27.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 59 lines ...
Sep  3 20:44:58.666: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (5.065863127s)
Sep  3 20:45:03.707: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (10.106605023s)
Sep  3 20:45:08.744: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (15.143187674s)
Sep  3 20:45:13.777: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (20.176294713s)
Sep  3 20:45:18.814: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (25.213276256s)
Sep  3 20:45:23.851: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Bound (30.250373583s)
Sep  3 20:45:28.885: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (35.284241754s)
Sep  3 20:45:33.920: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (40.319384018s)
Sep  3 20:45:38.957: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (45.356828312s)
Sep  3 20:45:43.995: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (50.393918105s)
Sep  3 20:45:49.027: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (55.426815027s)
Sep  3 20:45:54.061: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 found and phase=Failed (1m0.460760823s)
Sep  3 20:45:59.095: INFO: PersistentVolume pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 was removed
Sep  3 20:45:59.095: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  3 20:45:59.128: INFO: Claim "azuredisk-1353" in namespace "pvc-rdhpv" doesn't exist in the system
Sep  3 20:45:59.128: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-wkpnb
Sep  3 20:45:59.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:46:16.633: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-xhr5m" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  3 20:46:16.664: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 31.833107ms
Sep  3 20:46:18.698: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065318097s
Sep  3 20:46:20.734: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101214137s
Sep  3 20:46:22.769: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136923377s
Sep  3 20:46:24.804: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171901416s
Sep  3 20:46:26.839: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206871498s
... skipping 9 lines ...
Sep  3 20:46:47.195: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 30.562673898s
Sep  3 20:46:49.230: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 32.597903971s
Sep  3 20:46:51.267: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 34.634095581s
Sep  3 20:46:53.302: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Pending", Reason="", readiness=false. Elapsed: 36.669668732s
Sep  3 20:46:55.338: INFO: Pod "azuredisk-volume-tester-xhr5m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.705347337s
STEP: Saw pod success
Sep  3 20:46:55.338: INFO: Pod "azuredisk-volume-tester-xhr5m" satisfied condition "Succeeded or Failed"
Sep  3 20:46:55.338: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-xhr5m"
Sep  3 20:46:55.390: INFO: Pod azuredisk-volume-tester-xhr5m has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-xhr5m in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:46:55.495: INFO: deleting PVC "azuredisk-59"/"pvc-p4n6j"
Sep  3 20:46:55.495: INFO: Deleting PersistentVolumeClaim "pvc-p4n6j"
STEP: waiting for claim's PV "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" to be deleted
Sep  3 20:46:55.529: INFO: Waiting up to 10m0s for PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d to get deleted
Sep  3 20:46:55.561: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Released (32.115384ms)
Sep  3 20:47:00.598: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (5.069407019s)
Sep  3 20:47:05.636: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (10.10679664s)
Sep  3 20:47:10.672: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (15.14339407s)
Sep  3 20:47:15.705: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (20.176492828s)
Sep  3 20:47:20.741: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (25.212312463s)
Sep  3 20:47:25.778: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d found and phase=Failed (30.249276935s)
Sep  3 20:47:30.813: INFO: PersistentVolume pvc-cddace50-1c99-46a0-a9a2-534753d84d5d was removed
Sep  3 20:47:30.813: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  3 20:47:30.845: INFO: Claim "azuredisk-59" in namespace "pvc-p4n6j" doesn't exist in the system
Sep  3 20:47:30.845: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-78nvl
STEP: validating provisioned PV
STEP: checking the PV
... skipping 10 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:47:41.217: INFO: deleting PVC "azuredisk-59"/"pvc-mxnzl"
Sep  3 20:47:41.217: INFO: Deleting PersistentVolumeClaim "pvc-mxnzl"
STEP: waiting for claim's PV "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" to be deleted
Sep  3 20:47:41.255: INFO: Waiting up to 10m0s for PersistentVolume pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c to get deleted
Sep  3 20:47:41.300: INFO: PersistentVolume pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c found and phase=Failed (44.390677ms)
Sep  3 20:47:46.336: INFO: PersistentVolume pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c found and phase=Failed (5.080971834s)
Sep  3 20:47:51.374: INFO: PersistentVolume pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c found and phase=Failed (10.118973633s)
Sep  3 20:47:56.409: INFO: PersistentVolume pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c was removed
Sep  3 20:47:56.409: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  3 20:47:56.441: INFO: Claim "azuredisk-59" in namespace "pvc-mxnzl" doesn't exist in the system
Sep  3 20:47:56.441: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-b84c9
Sep  3 20:47:56.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-59" for this suite.
... skipping 27 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:47:57.258: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-q9r9z" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  3 20:47:57.290: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 32.060362ms
Sep  3 20:47:59.323: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065216403s
Sep  3 20:48:01.356: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09798932s
Sep  3 20:48:03.390: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 6.131492201s
Sep  3 20:48:05.424: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 8.165932698s
Sep  3 20:48:07.458: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 10.199711205s
... skipping 5 lines ...
Sep  3 20:48:19.663: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 22.40444418s
Sep  3 20:48:21.696: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 24.438196172s
Sep  3 20:48:23.732: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 26.474048064s
Sep  3 20:48:25.768: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Pending", Reason="", readiness=false. Elapsed: 28.509995368s
Sep  3 20:48:27.804: INFO: Pod "azuredisk-volume-tester-q9r9z": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.545365579s
STEP: Saw pod success
Sep  3 20:48:27.804: INFO: Pod "azuredisk-volume-tester-q9r9z" satisfied condition "Succeeded or Failed"
Sep  3 20:48:27.804: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-q9r9z"
Sep  3 20:48:27.847: INFO: Pod azuredisk-volume-tester-q9r9z has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.067380 seconds, 1.4GB/s
hello world
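The "100+0 records" and "104857600 bytes" lines are dd output from the tester pod writing 100 one-megabyte blocks to the mounted disk before printing "hello world". A hypothetical container command that would produce output like this (the path and exact invocation are guesses, not taken from the test spec):

dd if=/dev/zero of=/mnt/test-1/data bs=1M count=100 && echo 'hello world'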

... skipping 2 lines ...
STEP: checking the PV
Sep  3 20:48:27.951: INFO: deleting PVC "azuredisk-2546"/"pvc-w2kml"
Sep  3 20:48:27.951: INFO: Deleting PersistentVolumeClaim "pvc-w2kml"
STEP: waiting for claim's PV "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" to be deleted
Sep  3 20:48:27.985: INFO: Waiting up to 10m0s for PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b to get deleted
Sep  3 20:48:28.017: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Released (32.422782ms)
Sep  3 20:48:33.051: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (5.066377237s)
Sep  3 20:48:38.086: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (10.100622041s)
Sep  3 20:48:43.119: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (15.134229117s)
Sep  3 20:48:48.156: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (20.171114816s)
Sep  3 20:48:53.193: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (25.208442721s)
Sep  3 20:48:58.230: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (30.244670814s)
Sep  3 20:49:03.263: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (35.27826006s)
Sep  3 20:49:08.300: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (40.315048777s)
Sep  3 20:49:13.336: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (45.351544292s)
Sep  3 20:49:18.370: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (50.384955223s)
Sep  3 20:49:23.405: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b found and phase=Failed (55.420181019s)
Sep  3 20:49:28.438: INFO: PersistentVolume pvc-8c6e2523-50a7-4313-85af-e71838ed730b was removed
Sep  3 20:49:28.438: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  3 20:49:28.470: INFO: Claim "azuredisk-2546" in namespace "pvc-w2kml" doesn't exist in the system
Sep  3 20:49:28.470: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-pnjpm
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:49:40.886: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6jdqw" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  3 20:49:40.923: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 36.880538ms
Sep  3 20:49:42.962: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075800767s
Sep  3 20:49:45.001: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114833736s
Sep  3 20:49:47.041: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154566101s
Sep  3 20:49:49.080: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.193881305s
Sep  3 20:49:51.121: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.234816166s
... skipping 14 lines ...
Sep  3 20:50:21.654: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 40.768087118s
Sep  3 20:50:23.690: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 42.803705145s
Sep  3 20:50:25.725: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 44.839278702s
Sep  3 20:50:27.760: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Pending", Reason="", readiness=false. Elapsed: 46.874302837s
Sep  3 20:50:29.797: INFO: Pod "azuredisk-volume-tester-6jdqw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 48.910783641s
STEP: Saw pod success
Sep  3 20:50:29.797: INFO: Pod "azuredisk-volume-tester-6jdqw" satisfied condition "Succeeded or Failed"
Sep  3 20:50:29.797: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-6jdqw"
Sep  3 20:50:29.839: INFO: Pod azuredisk-volume-tester-6jdqw has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-6jdqw in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:50:29.945: INFO: deleting PVC "azuredisk-8582"/"pvc-rqzfv"
Sep  3 20:50:29.945: INFO: Deleting PersistentVolumeClaim "pvc-rqzfv"
STEP: waiting for claim's PV "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" to be deleted
Sep  3 20:50:29.979: INFO: Waiting up to 10m0s for PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 to get deleted
Sep  3 20:50:30.012: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Released (32.179931ms)
Sep  3 20:50:35.049: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (5.069421293s)
Sep  3 20:50:40.086: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (10.106928603s)
Sep  3 20:50:45.124: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (15.144174341s)
Sep  3 20:50:50.158: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (20.178250615s)
Sep  3 20:50:55.195: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (25.215729986s)
Sep  3 20:51:00.230: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (30.250194814s)
Sep  3 20:51:05.267: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (35.287048682s)
Sep  3 20:51:10.301: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (40.321664437s)
Sep  3 20:51:15.336: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (45.356701105s)
Sep  3 20:51:20.373: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (50.393864309s)
Sep  3 20:51:25.410: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (55.430204557s)
Sep  3 20:51:30.446: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (1m0.466752829s)
Sep  3 20:51:35.479: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 found and phase=Failed (1m5.499529471s)
Sep  3 20:51:40.516: INFO: PersistentVolume pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 was removed
Sep  3 20:51:40.516: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  3 20:51:40.549: INFO: Claim "azuredisk-8582" in namespace "pvc-rqzfv" doesn't exist in the system
Sep  3 20:51:40.549: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-vshzg
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:51:40.652: INFO: deleting PVC "azuredisk-8582"/"pvc-tgthq"
Sep  3 20:51:40.652: INFO: Deleting PersistentVolumeClaim "pvc-tgthq"
STEP: waiting for claim's PV "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" to be deleted
Sep  3 20:51:40.686: INFO: Waiting up to 10m0s for PersistentVolume pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b to get deleted
Sep  3 20:51:40.718: INFO: PersistentVolume pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b found and phase=Failed (32.499847ms)
Sep  3 20:51:45.755: INFO: PersistentVolume pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b found and phase=Failed (5.068716018s)
Sep  3 20:51:50.788: INFO: PersistentVolume pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b found and phase=Failed (10.101931864s)
Sep  3 20:51:55.824: INFO: PersistentVolume pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b was removed
Sep  3 20:51:55.824: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  3 20:51:55.860: INFO: Claim "azuredisk-8582" in namespace "pvc-tgthq" doesn't exist in the system
Sep  3 20:51:55.860: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-txfhn
STEP: validating provisioned PV
STEP: checking the PV
... skipping 391 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  3 20:55:01.774: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.348 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 247 lines ...
I0903 20:28:25.563472       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-03 20:21:19 +0000 UTC to 2032-08-31 20:26:19 +0000 UTC (now=2022-09-03 20:28:25.563420583 +0000 UTC))"
I0903 20:28:25.563924       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662236904\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662236904\" (2022-09-03 19:28:23 +0000 UTC to 2023-09-03 19:28:23 +0000 UTC (now=2022-09-03 20:28:25.563885497 +0000 UTC))"
I0903 20:28:25.564382       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662236905\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662236905\" (2022-09-03 19:28:24 +0000 UTC to 2023-09-03 19:28:24 +0000 UTC (now=2022-09-03 20:28:25.564345011 +0000 UTC))"
I0903 20:28:25.564566       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0903 20:28:25.572070       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0903 20:28:25.571054       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0903 20:28:27.112132       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0903 20:28:27.112323       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0903 20:28:29.875282       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0903 20:28:29.875792       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-obexd2-control-plane-xp4c2_c4fd201e-fe0c-469b-bcd7-0052f27965eb became leader"
W0903 20:28:29.915322       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0903 20:28:29.915930       1 azure_auth.go:232] Using AzurePublicCloud environment
I0903 20:28:29.915980       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0903 20:28:29.916035       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0903 20:28:29.917452       1 reflector.go:219] Starting reflector *v1.Node (17h41m58.541847654s) from k8s.io/client-go/informers/factory.go:134
I0903 20:28:29.917473       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0903 20:28:29.917732       1 reflector.go:219] Starting reflector *v1.Secret (17h41m58.541847654s) from k8s.io/client-go/informers/factory.go:134
I0903 20:28:29.917939       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0903 20:28:29.917749       1 reflector.go:219] Starting reflector *v1.ServiceAccount (17h41m58.541847654s) from k8s.io/client-go/informers/factory.go:134
I0903 20:28:29.918477       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0903 20:28:29.939047       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0903 20:28:29.939284       1 controllermanager.go:562] Starting "bootstrapsigner"
I0903 20:28:29.945398       1 controllermanager.go:577] Started "bootstrapsigner"
I0903 20:28:29.945418       1 controllermanager.go:562] Starting "nodeipam"
W0903 20:28:29.945425       1 controllermanager.go:569] Skipping "nodeipam"
I0903 20:28:29.945431       1 controllermanager.go:562] Starting "clusterrole-aggregation"
I0903 20:28:29.945569       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
... skipping 172 lines ...
I0903 20:28:32.480112       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0903 20:28:32.480407       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0903 20:28:32.480535       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0903 20:28:32.480634       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0903 20:28:32.480744       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0903 20:28:32.480865       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0903 20:28:32.481008       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 20:28:32.481110       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0903 20:28:32.481344       1 controllermanager.go:577] Started "attachdetach"
I0903 20:28:32.481369       1 controllermanager.go:562] Starting "persistentvolume-expander"
I0903 20:28:32.481434       1 attach_detach_controller.go:328] Starting attach detach controller
I0903 20:28:32.481448       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0903 20:28:32.629351       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
... skipping 44 lines ...
I0903 20:28:33.380648       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0903 20:28:33.380678       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0903 20:28:33.380756       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0903 20:28:33.380823       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0903 20:28:33.380852       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0903 20:28:33.380897       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0903 20:28:33.380919       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 20:28:33.380969       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0903 20:28:33.381077       1 controllermanager.go:577] Started "persistentvolume-binder"
I0903 20:28:33.381096       1 controllermanager.go:562] Starting "endpoint"
I0903 20:28:33.381237       1 pv_controller_base.go:308] Starting persistent volume controller
I0903 20:28:33.381251       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0903 20:28:33.529021       1 controllermanager.go:577] Started "endpoint"
... skipping 4 lines ...
I0903 20:28:33.680344       1 controllermanager.go:562] Starting "horizontalpodautoscaling"
I0903 20:28:33.680464       1 endpointslice_controller.go:257] Starting endpoint slice controller
I0903 20:28:33.680528       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0903 20:28:33.837183       1 controller.go:693] Ignoring node capz-obexd2-control-plane-xp4c2 with Ready condition status False
I0903 20:28:33.837234       1 controller.go:272] Triggering nodeSync
I0903 20:28:33.837597       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-control-plane-xp4c2"
W0903 20:28:33.837632       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-obexd2-control-plane-xp4c2" does not exist
I0903 20:28:33.851969       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-control-plane-xp4c2"
I0903 20:28:33.926160       1 request.go:597] Waited for 82.315322ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler
I0903 20:28:33.976269       1 request.go:597] Waited for 98.10708ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller/token
I0903 20:28:33.985504       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-obexd2-control-plane-xp4c2" new_ttl="0s"
I0903 20:28:33.989018       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-control-plane-xp4c2"
I0903 20:28:33.995813       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-obexd2-control-plane-xp4c2" new_ttl="0s"
... skipping 333 lines ...
I0903 20:28:35.504821       1 daemon_controller.go:226] Adding daemon set kube-proxy
I0903 20:28:35.511590       1 endpointslicemirroring_controller.go:274] syncEndpoints("kube-system/kube-dns")
I0903 20:28:35.511772       1 endpointslicemirroring_controller.go:309] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0903 20:28:35.511982       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (425.111µs)
I0903 20:28:35.514930       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (71.810991ms)
I0903 20:28:35.522963       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="104.702312ms"
I0903 20:28:35.523082       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:28:35.523163       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 20:28:35.523110071 +0000 UTC m=+11.942891827"
I0903 20:28:35.523752       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 20:28:35 +0000 UTC - now: 2022-09-03 20:28:35.523746987 +0000 UTC m=+11.943528643]
I0903 20:28:35.528023       1 controller_utils.go:581] Controller coredns-78fcd69978 created pod coredns-78fcd69978-zvcx9
I0903 20:28:35.528419       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-zvcx9"
I0903 20:28:35.530391       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/coredns-78fcd69978-zvcx9"
I0903 20:28:35.530649       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (20.4µs)
... skipping 120 lines ...
I0903 20:28:36.805504       1 controller_utils.go:581] Controller calico-kube-controllers-969cf87c4 created pod calico-kube-controllers-969cf87c4-r9699
I0903 20:28:36.805682       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0903 20:28:36.806290       1 event.go:291] "Event occurred" object="kube-system/calico-kube-controllers-969cf87c4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-969cf87c4-r9699"
I0903 20:28:36.804731       1 replica_set.go:380] Pod calico-kube-controllers-969cf87c4-r9699 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-kube-controllers-969cf87c4-r9699", GenerateName:"calico-kube-controllers-969cf87c4-", Namespace:"kube-system", SelfLink:"", UID:"8f4701d1-dac7-4e94-8d51-5916c9bedb6f", ResourceVersion:"487", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797833716, loc:(*time.Location)(0x751a1a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"969cf87c4"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-kube-controllers-969cf87c4", UID:"cdb3eb14-363b-45c8-a1e7-f866f4674d07", Controller:(*bool)(0xc002000047), BlockOwnerDeletion:(*bool)(0xc002000048)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f08960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f08978), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-st8dd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0016006c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-kube-controllers", Image:"docker.io/calico/kube-controllers:v3.23.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ENABLED_CONTROLLERS", Value:"node", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-st8dd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001e5bc40), 
ReadinessProbe:(*v1.Probe)(0xc001e5bc80), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020000e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-kube-controllers", DeprecatedServiceAccount:"calico-kube-controllers", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002f50a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002000140)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002000160)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002000168), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00200016c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001f6fa10), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0903 20:28:36.806614       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd0c1d2f7398cc, ext:13215886664, loc:(*time.Location)(0x751a1a0)}}
I0903 20:28:36.810088       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="28.92073ms"
I0903 20:28:36.810165       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:28:36.810247       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 20:28:36.810226177 +0000 UTC m=+13.230007833"
I0903 20:28:36.811230       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 20:28:36 +0000 UTC - now: 2022-09-03 20:28:36.811223767 +0000 UTC m=+13.231005523]
I0903 20:28:36.811326       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (15.141658ms)
I0903 20:28:36.811360       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd0c1d2f7398cc, ext:13215886664, loc:(*time.Location)(0x751a1a0)}}
I0903 20:28:36.811545       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 20:28:36.811935       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-969cf87c4"
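
[annotation] A hedged reading of the "Setting expectations" / "Lowered expectations" / "Controller expectations fulfilled" lines above: the ReplicaSet controller records how many pod creates and deletes it still expects to observe before it trusts its informer cache for the next sync. The sketch below is stdlib-only Go with made-up names (not the real controller.ControlleeExpectations type), just to illustrate the bookkeeping.

package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// expectations tracks creates/deletes a controller still expects to observe
// for one controllee key (e.g. "kube-system/calico-kube-controllers-969cf87c4").
type expectations struct {
	add, del  int64
	timestamp time.Time
}

// expect records that the controller is about to issue adds creates and dels deletes.
func (e *expectations) expect(adds, dels int64) {
	atomic.StoreInt64(&e.add, adds)
	atomic.StoreInt64(&e.del, dels)
	e.timestamp = time.Now()
}

// lower is called from the informer event handler when a create/delete is observed.
func (e *expectations) lower(adds, dels int64) {
	atomic.AddInt64(&e.add, -adds)
	atomic.AddInt64(&e.del, -dels)
}

// fulfilled reports whether the controller may sync from its cache again.
func (e *expectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	e := &expectations{}
	e.expect(1, 0)             // "Setting expectations ... add:1, del:0"
	fmt.Println(e.fulfilled()) // false: still waiting to see the pod create event
	e.lower(1, 0)              // "Lowered expectations" after the Pod add is observed
	fmt.Println(e.fulfilled()) // true: "Controller expectations fulfilled"
}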
... skipping 562 lines ...
I0903 20:29:12.938511       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 20:29:12.938469181 +0000 UTC m=+49.358250837"
I0903 20:29:12.939075       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="594.504µs"
I0903 20:29:13.455826       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="84.901µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41398" resp=200
I0903 20:29:13.889793       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (353.503µs)
I0903 20:29:14.432485       1 gc_controller.go:161] GC'ing orphaned
I0903 20:29:14.432512       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:29:14.878858       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-obexd2-control-plane-xp4c2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:28:50 +0000 UTC,LastTransitionTime:2022-09-03 20:28:10 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:29:10 +0000 UTC,LastTransitionTime:2022-09-03 20:29:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:29:14.878954       1 node_lifecycle_controller.go:1047] Node capz-obexd2-control-plane-xp4c2 ReadyCondition updated. Updating timestamp.
I0903 20:29:14.902681       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-obexd2-control-plane-xp4c2}
I0903 20:29:14.902706       1 taint_manager.go:440] "Updating known taints on node" node="capz-obexd2-control-plane-xp4c2" taints=[]
I0903 20:29:14.902739       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-obexd2-control-plane-xp4c2"
I0903 20:29:14.902750       1 timed_workers.go:132] Cancelling TimedWorkerQueue item kube-system/calico-kube-controllers-969cf87c4-r9699 at 2022-09-03 20:29:14.902746758 +0000 UTC m=+51.322528514
I0903 20:29:14.902770       1 timed_workers.go:132] Cancelling TimedWorkerQueue item kube-system/coredns-78fcd69978-zvcx9 at 2022-09-03 20:29:14.902767958 +0000 UTC m=+51.322549614
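
[annotation] The "Cancelling TimedWorkerQueue item" lines above show the NoExecute taint manager dropping pod evictions that had been scheduled while the control-plane node was still NotReady: the node became Ready, its taints were removed, so the deferred evictions are cancelled. A rough stdlib-only sketch of a cancellable timed work queue, with simplified, assumed names rather than the real timed_workers implementation:

package main

import (
	"fmt"
	"sync"
	"time"
)

// timedWorkers schedules one deferred eviction per pod key and lets the
// taint manager cancel it if the taint disappears first.
type timedWorkers struct {
	mu     sync.Mutex
	timers map[string]*time.Timer
}

func newTimedWorkers() *timedWorkers {
	return &timedWorkers{timers: map[string]*time.Timer{}}
}

// addWork schedules fn to run for key after delay (e.g. the pod's tolerationSeconds).
func (t *timedWorkers) addWork(key string, delay time.Duration, fn func()) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.timers[key] = time.AfterFunc(delay, fn)
}

// cancelWork drops a pending eviction, as in "Cancelling TimedWorkerQueue item ...".
func (t *timedWorkers) cancelWork(key string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if timer, ok := t.timers[key]; ok {
		timer.Stop()
		delete(t.timers, key)
	}
}

func main() {
	w := newTimedWorkers()
	w.addWork("kube-system/coredns-78fcd69978-zvcx9", 300*time.Second, func() { fmt.Println("evict") })
	// The node became Ready and all taints were removed, so cancel the eviction.
	w.cancelWork("kube-system/coredns-78fcd69978-zvcx9")
}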
... skipping 91 lines ...
I0903 20:30:16.074187       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0c204add4729, ext:25602055589, loc:(*time.Location)(0x751a1a0)}}
I0903 20:30:16.074505       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0c360470c642, ext:112494281406, loc:(*time.Location)(0x751a1a0)}}
I0903 20:30:16.074654       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-obexd2-mp-0000000], creating 1
I0903 20:30:16.075341       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-obexd2-mp-0000000}
I0903 20:30:16.075496       1 taint_manager.go:440] "Updating known taints on node" node="capz-obexd2-mp-0000000" taints=[]
I0903 20:30:16.075685       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
W0903 20:30:16.075808       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-obexd2-mp-0000000" does not exist
I0903 20:30:16.076603       1 controller.go:693] Ignoring node capz-obexd2-mp-0000000 with Ready condition status False
I0903 20:30:16.078858       1 controller.go:272] Triggering nodeSync
I0903 20:30:16.078878       1 controller.go:291] nodeSync has been triggered
I0903 20:30:16.078886       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 20:30:16.078894       1 controller.go:804] Finished updateLoadBalancerHosts
I0903 20:30:16.078920       1 controller.go:731] It took 3.4301e-05 seconds to finish nodeSyncInternal
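
[annotation] The "Ignoring node ... with Ready condition status False" and updateLoadBalancerHosts lines above come from the service controller's nodeSync: nodes that are not yet Ready are skipped when load-balancer host lists are rebuilt. A toy illustration of that filtering, with a pared-down stand-in type rather than v1.Node:

package main

import "fmt"

// node is a pared-down stand-in for v1.Node: just a name and a Ready flag.
type node struct {
	name  string
	ready bool
}

// readyNodes mirrors the filtering seen above: nodes whose Ready condition is
// False are ignored when load-balancer host lists are recomputed.
func readyNodes(nodes []node) []node {
	var out []node
	for _, n := range nodes {
		if !n.ready {
			fmt.Printf("Ignoring node %s with Ready condition status False\n", n.name)
			continue
		}
		out = append(out, n)
	}
	return out
}

func main() {
	nodes := []node{{name: "capz-obexd2-mp-0000000", ready: false}}
	fmt.Println(len(readyNodes(nodes))) // 0: nothing to add to load balancers yet
}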
... skipping 224 lines ...
I0903 20:30:32.515128       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-kzbct, PodDisruptionBudget controller will avoid syncing.
I0903 20:30:32.515136       1 disruption.go:430] No matching pdb for pod "calico-node-kzbct"
I0903 20:30:32.580836       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0c379925d5e5, ext:118841691645, loc:(*time.Location)(0x751a1a0)}}
I0903 20:30:32.580961       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0c3a22a0b363, ext:129000738783, loc:(*time.Location)(0x751a1a0)}}
I0903 20:30:32.585062       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-obexd2-mp-0000001], creating 1
I0903 20:30:32.582580       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000001"
W0903 20:30:32.585635       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-obexd2-mp-0000001" does not exist
I0903 20:30:32.582648       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-obexd2-mp-0000001}
I0903 20:30:32.585865       1 taint_manager.go:440] "Updating known taints on node" node="capz-obexd2-mp-0000001" taints=[]
I0903 20:30:32.582701       1 controller.go:693] Ignoring node capz-obexd2-mp-0000000 with Ready condition status False
I0903 20:30:32.586133       1 controller.go:693] Ignoring node capz-obexd2-mp-0000001 with Ready condition status False
I0903 20:30:32.586253       1 controller.go:272] Triggering nodeSync
I0903 20:30:32.586369       1 controller.go:291] nodeSync has been triggered
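
[annotation] "Nodes needing daemon pods for daemon set kube-proxy: [capz-obexd2-mp-0000001], creating 1" is the DaemonSet controller noticing a newly registered node without a daemon pod. A minimal sketch of that decision, under the simplifying assumption that scheduling constraints are already satisfied (hypothetical helper, not the real daemon_controller logic):

package main

import "fmt"

// nodesNeedingDaemonPods returns the nodes that should run the daemon
// but do not have a daemon pod yet; one pod is created per returned node.
func nodesNeedingDaemonPods(allNodes []string, running map[string]bool) []string {
	var need []string
	for _, n := range allNodes {
		if !running[n] {
			need = append(need, n)
		}
	}
	return need
}

func main() {
	nodes := []string{"capz-obexd2-control-plane-xp4c2", "capz-obexd2-mp-0000000", "capz-obexd2-mp-0000001"}
	running := map[string]bool{"capz-obexd2-control-plane-xp4c2": true, "capz-obexd2-mp-0000000": true}
	fmt.Println(nodesNeedingDaemonPods(nodes, running)) // [capz-obexd2-mp-0000001]
}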
... skipping 302 lines ...
I0903 20:30:49.862513       1 disruption.go:430] No matching pdb for pod "calico-node-l462m"
I0903 20:30:49.862594       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0c3e736a1709, ext:146282372385, loc:(*time.Location)(0x751a1a0)}}
I0903 20:30:49.862632       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0903 20:30:49.862701       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0903 20:30:49.862729       1 daemon_controller.go:1102] Updating daemon set status
I0903 20:30:49.862859       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.454336ms)
I0903 20:30:49.918699       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-obexd2-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:30:26 +0000 UTC,LastTransitionTime:2022-09-03 20:30:16 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:30:46 +0000 UTC,LastTransitionTime:2022-09-03 20:30:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:30:49.918818       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:30:49.929816       1 node_lifecycle_controller.go:893] Node capz-obexd2-mp-0000000 is healthy again, removing all taints
I0903 20:30:49.931422       1 node_lifecycle_controller.go:1214] Controller detected that zone canadacentral::0 is now in state Normal.
I0903 20:30:49.931996       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-obexd2-mp-0000000}
I0903 20:30:49.932021       1 taint_manager.go:440] "Updating known taints on node" node="capz-obexd2-mp-0000000" taints=[]
I0903 20:30:49.932062       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
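
[annotation] "Node capz-obexd2-mp-0000000 is healthy again, removing all taints" followed by taints=[] shows the node lifecycle controller stripping its own not-ready/unreachable NoExecute taints once the Ready condition flips to True. A small illustrative sketch with a local stand-in type (not v1.Taint and not the controller's actual code path):

package main

import "fmt"

// taint is a minimal stand-in for v1.Taint.
type taint struct {
	key    string
	effect string
}

// removeNotReadyTaints keeps every taint except the NoExecute condition taints
// the node lifecycle controller manages itself.
func removeNotReadyTaints(taints []taint) []taint {
	owned := map[string]bool{
		"node.kubernetes.io/not-ready":   true,
		"node.kubernetes.io/unreachable": true,
	}
	var kept []taint
	for _, t := range taints {
		if owned[t.key] && t.effect == "NoExecute" {
			continue
		}
		kept = append(kept, t)
	}
	return kept
}

func main() {
	before := []taint{{key: "node.kubernetes.io/not-ready", effect: "NoExecute"}}
	fmt.Println(removeNotReadyTaints(before)) // [] — matches taints=[] in the taint manager lines
}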
... skipping 75 lines ...
I0903 20:31:04.150959       1 daemon_controller.go:1102] Updating daemon set status
I0903 20:31:04.151055       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (1.559623ms)
I0903 20:31:04.404968       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:31:04.416140       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:31:04.489397       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:31:04.795207       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:31:04.934265       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-obexd2-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:30:42 +0000 UTC,LastTransitionTime:2022-09-03 20:30:32 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:31:02 +0000 UTC,LastTransitionTime:2022-09-03 20:31:02 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:31:04.934357       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:31:04.944463       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000001"
I0903 20:31:04.944574       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-obexd2-mp-0000001}
I0903 20:31:04.944596       1 taint_manager.go:440] "Updating known taints on node" node="capz-obexd2-mp-0000001" taints=[]
I0903 20:31:04.944615       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-obexd2-mp-0000001"
I0903 20:31:04.946240       1 node_lifecycle_controller.go:893] Node capz-obexd2-mp-0000001 is healthy again, removing all taints
... skipping 291 lines ...
I0903 20:33:49.365743       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:33:49.365751       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:33:49.365756       1 pv_controller.go:1763] operation "delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]" is already running, skipping
I0903 20:33:49.365778       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a] started
I0903 20:33:49.368039       1 pv_controller.go:1340] isVolumeReleased[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is released
I0903 20:33:49.368088       1 pv_controller.go:1404] doDeleteVolume [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]
I0903 20:33:49.403491       1 pv_controller.go:1259] deletion of volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:33:49.403781       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: set phase Failed
I0903 20:33:49.403950       1 pv_controller.go:858] updating PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: set phase Failed
I0903 20:33:49.408864       1 pv_protection_controller.go:205] Got event on PV pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a
I0903 20:33:49.409109       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1235
I0903 20:33:49.409301       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase: Failed, bound to: "azuredisk-8081/pvc-4sb79 (uid: 0513d4a4-9fdd-4056-8046-f24d477fb28a)", boundByController: true
I0903 20:33:49.409483       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is bound to claim azuredisk-8081/pvc-4sb79
I0903 20:33:49.409680       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: claim azuredisk-8081/pvc-4sb79 not found
I0903 20:33:49.409776       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:33:49.409816       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:33:49.409824       1 pv_controller.go:1763] operation "delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]" is already running, skipping
I0903 20:33:49.410001       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1235
I0903 20:33:49.410028       1 pv_controller.go:879] volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" entered phase "Failed"
I0903 20:33:49.410037       1 pv_controller.go:901] volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:33:49.410102       1 goroutinemap.go:150] Operation for "delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]" failed. No retries permitted until 2022-09-03 20:33:49.910083499 +0000 UTC m=+326.329865255 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:33:49.410336       1 event.go:291] "Event occurred" object="pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:33:49.411507       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:33:49.495660       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:33:49.495722       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1235
I0903 20:33:49.495760       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase: Failed, bound to: "azuredisk-8081/pvc-4sb79 (uid: 0513d4a4-9fdd-4056-8046-f24d477fb28a)", boundByController: true
I0903 20:33:49.495804       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is bound to claim azuredisk-8081/pvc-4sb79
I0903 20:33:49.495830       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: claim azuredisk-8081/pvc-4sb79 not found
I0903 20:33:49.495843       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:33:49.495860       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:33:49.495871       1 pv_controller.go:1765] operation "delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]" postponed due to exponential backoff
I0903 20:33:49.968603       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000000 ReadyCondition updated. Updating timestamp.
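
[annotation] The failure above ends with "No retries permitted until ... (durationBeforeRetry 500ms)" and the next attempt later reports 1s: the PV controller backs off exponentially per named operation before re-running deleteVolumeOperation for a volume whose disk is still attached. The sketch below shows only that doubling-backoff idea with made-up names; it is not the real goroutinemap API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff doubles the wait between failed attempts of one named operation,
// mirroring "durationBeforeRetry 500ms" followed by "durationBeforeRetry 1s".
type backoff struct {
	delay     time.Duration
	notBefore time.Time
}

func (b *backoff) allowed(now time.Time) bool { return now.After(b.notBefore) }

func (b *backoff) recordFailure(now time.Time) {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond
	} else {
		b.delay *= 2
	}
	b.notBefore = now.Add(b.delay)
}

func main() {
	op := &backoff{}
	deleteVolume := func() error { return errors.New("disk already attached to node") }

	for i := 0; i < 3; i++ {
		if !op.allowed(time.Now()) {
			time.Sleep(time.Until(op.notBefore))
		}
		if err := deleteVolume(); err != nil {
			op.recordFailure(time.Now())
			fmt.Printf("attempt %d failed, next retry after %v\n", i+1, op.delay)
		}
	}
}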
... skipping 19 lines ...
I0903 20:34:03.455275       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41344" resp=200
I0903 20:34:04.412360       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:34:04.420625       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:34:04.421644       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 10 items received
I0903 20:34:04.496721       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:34:04.496785       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1235
I0903 20:34:04.496826       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase: Failed, bound to: "azuredisk-8081/pvc-4sb79 (uid: 0513d4a4-9fdd-4056-8046-f24d477fb28a)", boundByController: true
I0903 20:34:04.496861       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is bound to claim azuredisk-8081/pvc-4sb79
I0903 20:34:04.496883       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: claim azuredisk-8081/pvc-4sb79 not found
I0903 20:34:04.496892       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:34:04.496908       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:34:04.496939       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a] started
I0903 20:34:04.499326       1 pv_controller.go:1340] isVolumeReleased[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is released
I0903 20:34:04.499346       1 pv_controller.go:1404] doDeleteVolume [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]
I0903 20:34:04.499491       1 pv_controller.go:1259] deletion of volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) since it's in attaching or detaching state
I0903 20:34:04.499509       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: set phase Failed
I0903 20:34:04.499519       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase Failed already set
E0903 20:34:04.499605       1 goroutinemap.go:150] Operation for "delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]" failed. No retries permitted until 2022-09-03 20:34:05.499590302 +0000 UTC m=+341.919371958 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) since it's in attaching or detaching state
I0903 20:34:04.915527       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:34:07.415734       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 93 items received
I0903 20:34:09.418558       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0903 20:34:09.722127       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:34:11.861643       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) returned with <nil>
I0903 20:34:11.861696       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a) succeeded
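
[annotation] The sequence here (delete rejected while the disk is "in attaching or detaching state", then "detach disk ... succeeded", and shortly afterwards a successful "deleted a managed disk") suggests the ordering the in-tree Azure volume plugin relies on: the managed disk has to be fully detached from the VMSS VM before its delete can go through, and the delete is simply retried until then. A hedged sketch of that retry loop with hypothetical helpers standing in for the Azure calls:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errDiskBusy stands in for the provider errors seen above: the disk is still
// attached, or an attach/detach operation is in flight.
var errDiskBusy = errors.New("disk is attached or in attaching/detaching state")

// deleteManagedDisk is a hypothetical stand-in for the ARM delete call.
func deleteManagedDisk(diskURI string, detached bool) error {
	if !detached {
		return errDiskBusy
	}
	return nil
}

// deleteWithRetry keeps retrying the delete until the detach (driven elsewhere
// by the attach/detach controller) has completed.
func deleteWithRetry(diskURI string, detachedAt time.Time) error {
	for attempt := 1; attempt <= 5; attempt++ {
		detached := time.Now().After(detachedAt)
		if err := deleteManagedDisk(diskURI, detached); err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(100 * time.Millisecond)
			continue
		}
		fmt.Println("deleted a managed disk:", diskURI)
		return nil
	}
	return errDiskBusy
}

func main() {
	// Placeholder disk URI; the detach is assumed to complete ~200ms from now.
	_ = deleteWithRetry("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>", time.Now().Add(200*time.Millisecond))
}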
... skipping 3 lines ...
I0903 20:34:13.986870       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0903 20:34:14.444288       1 gc_controller.go:161] GC'ing orphaned
I0903 20:34:14.444323       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:34:19.412902       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:34:19.497047       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:34:19.497202       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1235
I0903 20:34:19.497342       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase: Failed, bound to: "azuredisk-8081/pvc-4sb79 (uid: 0513d4a4-9fdd-4056-8046-f24d477fb28a)", boundByController: true
I0903 20:34:19.497439       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is bound to claim azuredisk-8081/pvc-4sb79
I0903 20:34:19.497520       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: claim azuredisk-8081/pvc-4sb79 not found
I0903 20:34:19.497555       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:34:19.497630       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:34:19.497731       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a] started
I0903 20:34:19.501952       1 pv_controller.go:1340] isVolumeReleased[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is released
... skipping 3 lines ...
I0903 20:34:24.706589       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a
I0903 20:34:24.706629       1 pv_controller.go:1435] volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" deleted
I0903 20:34:24.706645       1 pv_controller.go:1283] deleteVolumeOperation [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: success
I0903 20:34:24.713467       1 pv_protection_controller.go:205] Got event on PV pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a
I0903 20:34:24.713514       1 pv_protection_controller.go:125] Processing PV pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a
I0903 20:34:24.713804       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" with version 1288
I0903 20:34:24.713836       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: phase: Failed, bound to: "azuredisk-8081/pvc-4sb79 (uid: 0513d4a4-9fdd-4056-8046-f24d477fb28a)", boundByController: true
I0903 20:34:24.713863       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: volume is bound to claim azuredisk-8081/pvc-4sb79
I0903 20:34:24.713878       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: claim azuredisk-8081/pvc-4sb79 not found
I0903 20:34:24.713885       1 pv_controller.go:1108] reclaimVolume[pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a]: policy is Delete
I0903 20:34:24.713899       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a[6cdca1ff-99a8-4ea6-9e8f-eb207b15e0aa]]
I0903 20:34:24.713917       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a] started
I0903 20:34:24.717033       1 pv_controller.go:1243] Volume "pvc-0513d4a4-9fdd-4056-8046-f24d477fb28a" is already being deleted
... skipping 169 lines ...
I0903 20:34:35.528797       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-t862r, uid 3d43c58b-ed12-4df0-b75a-0f427fcb3df5, event type delete
I0903 20:34:35.598153       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0903 20:34:35.598470       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3µs)
I0903 20:34:35.621053       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (156.068767ms)
I0903 20:34:36.086192       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0903 20:34:36.144775       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-b2kzk, uid b69fda1e-92d3-480a-ab0e-47ee88a4f92a, event type delete
E0903 20:34:36.157127       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-77gmk" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0903 20:34:36.164164       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid a6f32cb4-20bf-4620-afe1-e67d68ceb170, event type delete
I0903 20:34:36.167083       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (2.75234ms)
I0903 20:34:36.187751       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0903 20:34:36.187932       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.401µs)
I0903 20:34:36.187932       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid ce388c67-8b65-4062-a739-6e3b16b21403, event type delete
I0903 20:34:36.239098       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (3.3µs)
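
[annotation] The tokens_controller error above ("unable to create new content in namespace azuredisk-4728 because it is being terminated") is the usual transient race during namespace teardown: the token controller tries to recreate a service-account token while the namespace controller is draining the namespace, and the API server rejects new writes. It resolves itself once the namespace is gone. A tiny illustrative check (string matching here is for illustration only, not how the controllers classify errors):

package main

import (
	"errors"
	"fmt"
	"strings"
)

// isNamespaceTerminating reports whether an error looks like the transient
// "namespace is being terminated" rejection seen in the line above.
func isNamespaceTerminating(err error) bool {
	return err != nil && strings.Contains(err.Error(), "because it is being terminated")
}

func main() {
	err := errors.New(`secrets "default-token-77gmk" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated`)
	if isNamespaceTerminating(err) {
		fmt.Println("transient: requeue and let the namespace finish deleting")
	}
}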
... skipping 352 lines ...
I0903 20:36:39.667913       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]]
I0903 20:36:39.667920       1 pv_controller.go:1763] operation "delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]" is already running, skipping
I0903 20:36:39.667976       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0] started
I0903 20:36:39.667424       1 pv_protection_controller.go:205] Got event on PV pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0
I0903 20:36:39.669753       1 pv_controller.go:1340] isVolumeReleased[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is released
I0903 20:36:39.669944       1 pv_controller.go:1404] doDeleteVolume [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]
I0903 20:36:39.708138       1 pv_controller.go:1259] deletion of volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
I0903 20:36:39.708161       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: set phase Failed
I0903 20:36:39.708169       1 pv_controller.go:858] updating PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: set phase Failed
I0903 20:36:39.710906       1 pv_protection_controller.go:205] Got event on PV pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0
I0903 20:36:39.710943       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" with version 1555
I0903 20:36:39.710971       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: phase: Failed, bound to: "azuredisk-5466/pvc-bq8q8 (uid: 7016454c-6d15-4c5e-81f2-8d7d66d5fce0)", boundByController: true
I0903 20:36:39.710995       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is bound to claim azuredisk-5466/pvc-bq8q8
I0903 20:36:39.711019       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: claim azuredisk-5466/pvc-bq8q8 not found
I0903 20:36:39.711029       1 pv_controller.go:1108] reclaimVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: policy is Delete
I0903 20:36:39.711044       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]]
I0903 20:36:39.711053       1 pv_controller.go:1763] operation "delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]" is already running, skipping
I0903 20:36:39.711906       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" with version 1555
I0903 20:36:39.712054       1 pv_controller.go:879] volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" entered phase "Failed"
I0903 20:36:39.712070       1 pv_controller.go:901] volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
E0903 20:36:39.712124       1 goroutinemap.go:150] Operation for "delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]" failed. No retries permitted until 2022-09-03 20:36:40.212096504 +0000 UTC m=+496.631878160 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
I0903 20:36:39.712211       1 event.go:291] "Event occurred" object="pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted"
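
[annotation] The VolumeFailedDelete events above carry full ARM resource IDs for the managed disk and the VMSS VM it is still attached to. For readers correlating these events with Azure resources, a small illustrative helper (not provider code) that pulls the disk name out of such an ID:

package main

import (
	"fmt"
	"strings"
)

// diskNameFromURI extracts the managed-disk name from an ARM resource ID of the
// form "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<name>".
func diskNameFromURI(uri string) (string, error) {
	parts := strings.Split(strings.Trim(uri, "/"), "/")
	for i := 0; i+1 < len(parts); i++ {
		if strings.EqualFold(parts[i], "disks") {
			return parts[i+1], nil
		}
	}
	return "", fmt.Errorf("no disk segment in %q", uri)
}

func main() {
	name, err := diskNameFromURI("/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0")
	fmt.Println(name, err) // capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 <nil>
}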
I0903 20:36:42.883417       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:36:43.068182       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000001"
I0903 20:36:43.068218       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 to the node "capz-obexd2-mp-0000001" mounted false
I0903 20:36:43.104394       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000001"
I0903 20:36:43.104430       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0 to the node "capz-obexd2-mp-0000001" mounted false
... skipping 7 lines ...
I0903 20:36:43.454856       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="87.303µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57180" resp=200
I0903 20:36:44.999599       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:36:45.249160       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 17 items received
I0903 20:36:49.418682       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:36:49.503738       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:36:49.503816       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" with version 1555
I0903 20:36:49.503864       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: phase: Failed, bound to: "azuredisk-5466/pvc-bq8q8 (uid: 7016454c-6d15-4c5e-81f2-8d7d66d5fce0)", boundByController: true
I0903 20:36:49.503913       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is bound to claim azuredisk-5466/pvc-bq8q8
I0903 20:36:49.503933       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: claim azuredisk-5466/pvc-bq8q8 not found
I0903 20:36:49.503948       1 pv_controller.go:1108] reclaimVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: policy is Delete
I0903 20:36:49.503965       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]]
I0903 20:36:49.503999       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0] started
I0903 20:36:49.509858       1 pv_controller.go:1340] isVolumeReleased[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is released
I0903 20:36:49.509881       1 pv_controller.go:1404] doDeleteVolume [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]
I0903 20:36:49.509919       1 pv_controller.go:1259] deletion of volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) since it's in attaching or detaching state
I0903 20:36:49.509936       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: set phase Failed
I0903 20:36:49.509948       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: phase Failed already set
E0903 20:36:49.509977       1 goroutinemap.go:150] Operation for "delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]" failed. No retries permitted until 2022-09-03 20:36:50.509958873 +0000 UTC m=+506.929740629 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0) since it's in attaching or detaching state
I0903 20:36:50.922377       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 60 items received
I0903 20:36:52.409593       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 3 items received
I0903 20:36:53.462400       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.608µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:33454" resp=200
I0903 20:36:53.632753       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 14 items received
I0903 20:36:54.435089       1 controller.go:272] Triggering nodeSync
I0903 20:36:54.435127       1 controller.go:291] nodeSync has been triggered
... skipping 13 lines ...
I0903 20:36:59.420262       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
I0903 20:37:03.454388       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="97.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55562" resp=200
I0903 20:37:04.418979       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:37:04.424136       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:37:04.504744       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:37:04.504807       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" with version 1555
I0903 20:37:04.505011       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: phase: Failed, bound to: "azuredisk-5466/pvc-bq8q8 (uid: 7016454c-6d15-4c5e-81f2-8d7d66d5fce0)", boundByController: true
I0903 20:37:04.505084       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is bound to claim azuredisk-5466/pvc-bq8q8
I0903 20:37:04.505109       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: claim azuredisk-5466/pvc-bq8q8 not found
I0903 20:37:04.505118       1 pv_controller.go:1108] reclaimVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: policy is Delete
I0903 20:37:04.505150       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]]
I0903 20:37:04.505255       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0] started
I0903 20:37:04.514203       1 pv_controller.go:1340] isVolumeReleased[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is released
... skipping 4 lines ...
I0903 20:37:09.782685       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0
I0903 20:37:09.782940       1 pv_controller.go:1435] volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" deleted
I0903 20:37:09.782962       1 pv_controller.go:1283] deleteVolumeOperation [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: success
I0903 20:37:09.787285       1 pv_protection_controller.go:205] Got event on PV pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0
I0903 20:37:09.787564       1 pv_protection_controller.go:125] Processing PV pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0
I0903 20:37:09.787314       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" with version 1601
I0903 20:37:09.788467       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: phase: Failed, bound to: "azuredisk-5466/pvc-bq8q8 (uid: 7016454c-6d15-4c5e-81f2-8d7d66d5fce0)", boundByController: true
I0903 20:37:09.788740       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: volume is bound to claim azuredisk-5466/pvc-bq8q8
I0903 20:37:09.789016       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: claim azuredisk-5466/pvc-bq8q8 not found
I0903 20:37:09.789272       1 pv_controller.go:1108] reclaimVolume[pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0]: policy is Delete
I0903 20:37:09.789471       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0[4c57fc46-fb40-4333-8be8-163f616cdb3b]]
I0903 20:37:09.789678       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0] started
I0903 20:37:09.793452       1 pv_controller_base.go:235] volume "pvc-7016454c-6d15-4c5e-81f2-8d7d66d5fce0" deleted
... skipping 106 lines ...
I0903 20:37:15.013749       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc") from node "capz-obexd2-mp-0000000" 
I0903 20:37:15.013834       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" to node "capz-obexd2-mp-0000000".
I0903 20:37:15.063738       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" lun 0 to node "capz-obexd2-mp-0000000".
I0903 20:37:15.063784       1 azure_controller_vmss.go:101] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - attach disk(capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) with DiskEncryptionSetID()
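
[annotation] "GetDiskLun returned: cannot find Lun for disk ..." followed by "Trying to attach volume ... lun 0" reads as: the disk is not attached to the VM yet, so the attach picks a free LUN and uses it. A toy sketch of choosing the lowest free LUN (an assumed simplification, not the provider's actual implementation):

package main

import "fmt"

// lowestFreeLUN returns the smallest LUN in [0, maxLUNs) not already used by an
// attached data disk, or -1 if the VM has no free slots.
func lowestFreeLUN(used map[int]bool, maxLUNs int) int {
	for lun := 0; lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun
		}
	}
	return -1
}

func main() {
	// No data disks attached yet, so the first attach lands on LUN 0,
	// matching "Trying to attach volume ... lun 0 to node ...".
	fmt.Println(lowestFreeLUN(map[int]bool{}, 32)) // 0
}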
I0903 20:37:16.289443       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0903 20:37:16.304501       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-tgjws, uid 082ceb27-d326-4432-8ec9-69976da9f4d1, event type delete
E0903 20:37:16.318601       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-vkdbw" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0903 20:37:16.341199       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5466, name kube-root-ca.crt, uid 45d6fce9-7a62-4d6a-8169-ef20a405cc99, event type delete
I0903 20:37:16.344301       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (2.950133ms)
I0903 20:37:16.348643       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-jsb4l.171173fe92f58f54, uid 4c6f9290-2422-461c-9195-96d723b63cc2, event type delete
I0903 20:37:16.352892       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-jsb4l.1711740165789432, uid 4c6f8aa6-90dd-4105-9de7-9c5fcafc07ab, event type delete
I0903 20:37:16.356266       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-jsb4l.17117401fbfaa068, uid b1116a1e-1734-424a-9c3c-1ff3bac613fb, event type delete
I0903 20:37:16.359649       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-jsb4l.1711740289db4e4e, uid d8e150ee-e1ff-4963-aaad-0a45c8268d88, event type delete
... skipping 130 lines ...
I0903 20:37:32.591029       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:37:32.591039       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:37:32.591047       1 pv_controller.go:1763] operation "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" is already running, skipping
I0903 20:37:32.591123       1 pv_controller.go:1231] deleteVolumeOperation [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc] started
I0903 20:37:32.592743       1 pv_controller.go:1340] isVolumeReleased[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is released
I0903 20:37:32.592777       1 pv_controller.go:1404] doDeleteVolume [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]
I0903 20:37:32.622694       1 pv_controller.go:1259] deletion of volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:37:32.622721       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: set phase Failed
I0903 20:37:32.622731       1 pv_controller.go:858] updating PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: set phase Failed
I0903 20:37:32.626446       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1696
I0903 20:37:32.626477       1 pv_controller.go:879] volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" entered phase "Failed"
I0903 20:37:32.626508       1 pv_controller.go:901] volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:37:32.626556       1 goroutinemap.go:150] Operation for "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" failed. No retries permitted until 2022-09-03 20:37:33.126537465 +0000 UTC m=+549.546319121 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:37:32.626854       1 event.go:291] "Event occurred" object="pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:37:32.627141       1 pv_protection_controller.go:205] Got event on PV pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc
I0903 20:37:32.627177       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1696
I0903 20:37:32.627480       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase: Failed, bound to: "azuredisk-2790/pvc-nwgnb (uid: 70cb7213-e1c5-4aff-9f22-b8d7644cd5bc)", boundByController: true
I0903 20:37:32.627609       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is bound to claim azuredisk-2790/pvc-nwgnb
I0903 20:37:32.627743       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: claim azuredisk-2790/pvc-nwgnb not found
I0903 20:37:32.627856       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:37:32.627979       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:37:32.628086       1 pv_controller.go:1765] operation "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" postponed due to exponential backoff
I0903 20:37:32.791238       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
... skipping 2 lines ...
I0903 20:37:34.420601       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:37:34.424754       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:37:34.456831       1 gc_controller.go:161] GC'ing orphaned
I0903 20:37:34.456859       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:37:34.505792       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:37:34.506016       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1696
I0903 20:37:34.506167       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase: Failed, bound to: "azuredisk-2790/pvc-nwgnb (uid: 70cb7213-e1c5-4aff-9f22-b8d7644cd5bc)", boundByController: true
I0903 20:37:34.506292       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is bound to claim azuredisk-2790/pvc-nwgnb
I0903 20:37:34.506418       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: claim azuredisk-2790/pvc-nwgnb not found
I0903 20:37:34.506519       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:37:34.506628       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:37:34.506739       1 pv_controller.go:1231] deleteVolumeOperation [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc] started
I0903 20:37:34.512409       1 pv_controller.go:1340] isVolumeReleased[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is released
I0903 20:37:34.512432       1 pv_controller.go:1404] doDeleteVolume [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]
I0903 20:37:34.542408       1 pv_controller.go:1259] deletion of volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:37:34.542661       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: set phase Failed
I0903 20:37:34.542676       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase Failed already set
E0903 20:37:34.542707       1 goroutinemap.go:150] Operation for "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" failed. No retries permitted until 2022-09-03 20:37:35.542686638 +0000 UTC m=+551.962468294 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
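
The "postponed due to exponential backoff" and "No retries permitted until ... (durationBeforeRetry 1s)" entries above show the PV controller's per-operation backoff: while the Azure disk is still attached to the VMSS instance, each delete attempt fails and the retry window doubles (500ms, 1s, 2s, ... as the later volumes in this log also show). A minimal sketch of that retry pattern using client-go's wait helpers follows; azureDiskStillAttached, the step count, and the timings are illustrative assumptions, not the controller's actual code.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// errDiskAttached stands in for the "disk ... already attached to node ...,
// could not be deleted" error seen in the log above (illustrative only).
var errDiskAttached = errors.New("disk already attached to node, could not be deleted")

// azureDiskStillAttached is a hypothetical probe; the real controller asks the
// Azure API whether the managed disk is still attached to the VMSS instance.
func azureDiskStillAttached(attempt int) bool { return attempt < 3 }

func main() {
	attempt := 0
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // initial retry delay, as in the log
		Factor:   2.0,                    // doubles after each failure: 500ms, 1s, 2s, ...
		Steps:    5,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if azureDiskStillAttached(attempt) {
			fmt.Printf("attempt %d: %v; backing off\n", attempt, errDiskAttached)
			return false, nil // retry after the next backoff interval
		}
		fmt.Printf("attempt %d: disk detached, delete can proceed\n", attempt)
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}
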
I0903 20:37:35.026539       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:37:35.422077       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 22 items received
I0903 20:37:36.623838       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:37:36.623871       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:37:36.686203       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:37:36.686234       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc to the node "capz-obexd2-mp-0000000" mounted false
... skipping 6 lines ...
I0903 20:37:40.010155       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:37:43.453537       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41638" resp=200
I0903 20:37:48.402249       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 34 items received
I0903 20:37:49.421350       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:37:49.506147       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:37:49.506346       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1696
I0903 20:37:49.506466       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase: Failed, bound to: "azuredisk-2790/pvc-nwgnb (uid: 70cb7213-e1c5-4aff-9f22-b8d7644cd5bc)", boundByController: true
I0903 20:37:49.506506       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is bound to claim azuredisk-2790/pvc-nwgnb
I0903 20:37:49.506530       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: claim azuredisk-2790/pvc-nwgnb not found
I0903 20:37:49.506589       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:37:49.506656       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:37:49.506703       1 pv_controller.go:1231] deleteVolumeOperation [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc] started
I0903 20:37:49.511256       1 pv_controller.go:1340] isVolumeReleased[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is released
I0903 20:37:49.511274       1 pv_controller.go:1404] doDeleteVolume [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]
I0903 20:37:49.511305       1 pv_controller.go:1259] deletion of volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) since it's in attaching or detaching state
I0903 20:37:49.511322       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: set phase Failed
I0903 20:37:49.511332       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase Failed already set
E0903 20:37:49.511358       1 goroutinemap.go:150] Operation for "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" failed. No retries permitted until 2022-09-03 20:37:51.511340983 +0000 UTC m=+567.931122639 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) since it's in attaching or detaching state
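
The failure at this point differs from the earlier "already attached" one: the delete is refused because the disk is mid-detach. A hypothetical guard illustrating that ordering is sketched below; DiskState, deleteManagedDisk, and the state names are invented for illustration and are not the cloud provider's API.

package main

import (
	"errors"
	"fmt"
)

// DiskState mirrors the situations the log distinguishes: a disk that is
// attaching or detaching must not be deleted yet.
type DiskState string

const (
	DiskUnattached DiskState = "Unattached"
	DiskAttached   DiskState = "Attached"
	DiskAttaching  DiskState = "Attaching"
	DiskDetaching  DiskState = "Detaching"
)

var errDiskBusy = errors.New("disk is in attaching or detaching state; retry later")

// deleteManagedDisk is a hypothetical wrapper: it only calls the (stubbed)
// delete once the disk has settled into a deletable state.
func deleteManagedDisk(state DiskState, deleteFn func() error) error {
	switch state {
	case DiskAttaching, DiskDetaching:
		return errDiskBusy // matches "failed to delete disk ... since it's in attaching or detaching state"
	case DiskAttached:
		return errors.New("disk already attached to a node, could not be deleted")
	default:
		return deleteFn()
	}
}

func main() {
	err := deleteManagedDisk(DiskDetaching, func() error { return nil })
	fmt.Println(err) // the PV controller's backoff retries this later
}
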
I0903 20:37:52.043070       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) returned with <nil>
I0903 20:37:52.043123       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc) succeeded
I0903 20:37:52.043135       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc was detached from node:capz-obexd2-mp-0000000
I0903 20:37:52.043306       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc") on node "capz-obexd2-mp-0000000" 
I0903 20:37:53.454389       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="83.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47972" resp=200
I0903 20:37:54.457561       1 gc_controller.go:161] GC'ing orphaned
... skipping 2 lines ...
I0903 20:38:03.454561       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="93.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34100" resp=200
I0903 20:38:03.661417       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0903 20:38:04.421567       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:38:04.425708       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:38:04.506372       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:38:04.506470       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1696
I0903 20:38:04.506633       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase: Failed, bound to: "azuredisk-2790/pvc-nwgnb (uid: 70cb7213-e1c5-4aff-9f22-b8d7644cd5bc)", boundByController: true
I0903 20:38:04.506732       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is bound to claim azuredisk-2790/pvc-nwgnb
I0903 20:38:04.506760       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: claim azuredisk-2790/pvc-nwgnb not found
I0903 20:38:04.506808       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:38:04.506825       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:38:04.506910       1 pv_controller.go:1231] deleteVolumeOperation [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc] started
I0903 20:38:04.511132       1 pv_controller.go:1340] isVolumeReleased[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is released
... skipping 2 lines ...
I0903 20:38:05.501795       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc
I0903 20:38:05.501826       1 pv_controller.go:1435] volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" deleted
I0903 20:38:05.501839       1 pv_controller.go:1283] deleteVolumeOperation [pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: success
I0903 20:38:05.505183       1 pv_protection_controller.go:205] Got event on PV pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc
I0903 20:38:05.505213       1 pv_protection_controller.go:125] Processing PV pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc
I0903 20:38:05.505640       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" with version 1746
I0903 20:38:05.505713       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: phase: Failed, bound to: "azuredisk-2790/pvc-nwgnb (uid: 70cb7213-e1c5-4aff-9f22-b8d7644cd5bc)", boundByController: true
I0903 20:38:05.505787       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: volume is bound to claim azuredisk-2790/pvc-nwgnb
I0903 20:38:05.505846       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: claim azuredisk-2790/pvc-nwgnb not found
I0903 20:38:05.505861       1 pv_controller.go:1108] reclaimVolume[pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc]: policy is Delete
I0903 20:38:05.505877       1 pv_controller.go:1752] scheduleOperation[delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]]
I0903 20:38:05.505924       1 pv_controller.go:1763] operation "delete-pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc[de47ba41-5b8e-4c92-bbb5-8da70f18ca37]" is already running, skipping
I0903 20:38:05.517072       1 pv_controller_base.go:235] volume "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" deleted
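
Once the managed disk is gone, the PV object itself is removed, which is the state a test typically waits for. A minimal sketch of that check, assuming a local ./kubeconfig and using the PV name above purely as an example (this is not the test suite's actual helper), could look like this.

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Error handling elided for brevity in this sketch.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)

	pvName := "pvc-70cb7213-e1c5-4aff-9f22-b8d7644cd5bc" // example from the log

	// The PV has reclaim policy Delete, so once the backing disk is deleted the
	// controller removes the PV object; treat a NotFound error as success.
	err := wait.PollImmediate(10*time.Second, 10*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		return false, err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("persistent volume reclaimed and deleted")
}
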
... skipping 114 lines ...
I0903 20:38:13.060392       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-pp72q.17117427066ede85, uid 65153a4b-a959-4121-8007-6c7b10e4fbe2, event type delete
I0903 20:38:13.062434       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name pvc-nwgnb.171174232c693f29, uid 7485c005-7986-4a18-a7f1-556dd09f587f, event type delete
I0903 20:38:13.065355       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name pvc-nwgnb.17117423ca6a2f51, uid e352fadd-9fea-4746-9a71-5d0ce7a5cbdb, event type delete
I0903 20:38:13.073670       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2790, name kube-root-ca.crt, uid b36cc544-327f-44e1-bbb4-4823184c6018, event type delete
I0903 20:38:13.077209       1 publisher.go:186] Finished syncing namespace "azuredisk-2790" (3.499751ms)
I0903 20:38:13.099114       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-jknms, uid d5063c42-28b1-472a-aa52-a6951b834af7, event type delete
E0903 20:38:13.114741       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-p44b7" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0903 20:38:13.158009       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0903 20:38:13.158062       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid f467d35d-b3a8-4457-b37a-bb4507b126b8, event type delete
I0903 20:38:13.158302       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (1.9µs)
I0903 20:38:13.186963       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (4µs)
I0903 20:38:13.187257       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2790, estimate: 0, errors: <nil>
I0903 20:38:13.196278       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2790" (176.226016ms)
... skipping 122 lines ...
I0903 20:38:31.254996       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:38:31.255005       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:38:31.255012       1 pv_controller.go:1763] operation "delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]" is already running, skipping
I0903 20:38:31.255035       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef] started
I0903 20:38:31.256778       1 pv_controller.go:1340] isVolumeReleased[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is released
I0903 20:38:31.256795       1 pv_controller.go:1404] doDeleteVolume [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]
I0903 20:38:31.287563       1 pv_controller.go:1259] deletion of volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:38:31.287584       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: set phase Failed
I0903 20:38:31.287592       1 pv_controller.go:858] updating PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: set phase Failed
I0903 20:38:31.290486       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1842
I0903 20:38:31.290725       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase: Failed, bound to: "azuredisk-5356/pvc-ljjqm (uid: 6ae6fd70-0cca-4723-ab39-465c7b98cfef)", boundByController: true
I0903 20:38:31.290957       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is bound to claim azuredisk-5356/pvc-ljjqm
I0903 20:38:31.290983       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: claim azuredisk-5356/pvc-ljjqm not found
I0903 20:38:31.290990       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:38:31.291034       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:38:31.291043       1 pv_controller.go:1763] operation "delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]" is already running, skipping
I0903 20:38:31.291215       1 pv_protection_controller.go:205] Got event on PV pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef
I0903 20:38:31.291303       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1842
I0903 20:38:31.291423       1 pv_controller.go:879] volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" entered phase "Failed"
I0903 20:38:31.291468       1 pv_controller.go:901] volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:38:31.291513       1 goroutinemap.go:150] Operation for "delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]" failed. No retries permitted until 2022-09-03 20:38:31.791494817 +0000 UTC m=+608.211276473 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:38:31.291708       1 event.go:291] "Event occurred" object="pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
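
The Warning/VolumeFailedDelete events recorded above are attached to the PersistentVolume object and can be inspected directly. A hedged client-go sketch listing them for one of the PVs in this log follows; the "default" namespace and the field-selector values are assumptions about where the recorder stores events for cluster-scoped objects.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Error handling elided for brevity in this sketch.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)

	pvName := "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" // example from the log

	// Events for cluster-scoped objects such as PVs are usually recorded in the
	// "default" namespace; adjust if your event recorder is configured differently.
	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=PersistentVolume,involvedObject.name=" + pvName +
			",reason=VolumeFailedDelete",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
	}
}
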
I0903 20:38:32.892160       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-obexd2-mp-0000000, refreshing the cache
I0903 20:38:33.454515       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42118" resp=200
I0903 20:38:34.414813       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:38:34.414813       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:38:34.422074       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 4 lines ...
I0903 20:38:34.435670       1 controller.go:804] Finished updateLoadBalancerHosts
I0903 20:38:34.435677       1 controller.go:731] It took 4.28e-05 seconds to finish nodeSyncInternal
I0903 20:38:34.459250       1 gc_controller.go:161] GC'ing orphaned
I0903 20:38:34.459276       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:38:34.507634       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:38:34.507724       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1842
I0903 20:38:34.507832       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase: Failed, bound to: "azuredisk-5356/pvc-ljjqm (uid: 6ae6fd70-0cca-4723-ab39-465c7b98cfef)", boundByController: true
I0903 20:38:34.507937       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is bound to claim azuredisk-5356/pvc-ljjqm
I0903 20:38:34.507987       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: claim azuredisk-5356/pvc-ljjqm not found
I0903 20:38:34.508000       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:38:34.508017       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:38:34.508093       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef] started
I0903 20:38:34.515632       1 pv_controller.go:1340] isVolumeReleased[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is released
I0903 20:38:34.515651       1 pv_controller.go:1404] doDeleteVolume [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]
I0903 20:38:34.554423       1 pv_controller.go:1259] deletion of volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:38:34.554445       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: set phase Failed
I0903 20:38:34.554454       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase Failed already set
E0903 20:38:34.554480       1 goroutinemap.go:150] Operation for "delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]" failed. No retries permitted until 2022-09-03 20:38:35.554462893 +0000 UTC m=+611.974244649 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:38:34.677974       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0903 20:38:35.052539       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:38:36.000858       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 20:38:36.000759249 +0000 UTC m=+612.420541905"
I0903 20:38:36.001864       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.091715ms"
I0903 20:38:36.694927       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:38:36.695077       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef to the node "capz-obexd2-mp-0000000" mounted false
... skipping 9 lines ...
I0903 20:38:37.001643       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="821.311µs"
I0903 20:38:40.019305       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:38:43.454451       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47852" resp=200
I0903 20:38:49.422894       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:38:49.508312       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:38:49.508385       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1842
I0903 20:38:49.508425       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase: Failed, bound to: "azuredisk-5356/pvc-ljjqm (uid: 6ae6fd70-0cca-4723-ab39-465c7b98cfef)", boundByController: true
I0903 20:38:49.508461       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is bound to claim azuredisk-5356/pvc-ljjqm
I0903 20:38:49.508483       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: claim azuredisk-5356/pvc-ljjqm not found
I0903 20:38:49.508492       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:38:49.508509       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:38:49.508535       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef] started
I0903 20:38:49.515898       1 pv_controller.go:1340] isVolumeReleased[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is released
I0903 20:38:49.515918       1 pv_controller.go:1404] doDeleteVolume [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]
I0903 20:38:49.516087       1 pv_controller.go:1259] deletion of volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) since it's in attaching or detaching state
I0903 20:38:49.516106       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: set phase Failed
I0903 20:38:49.516115       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase Failed already set
E0903 20:38:49.516234       1 goroutinemap.go:150] Operation for "delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]" failed. No retries permitted until 2022-09-03 20:38:51.516208992 +0000 UTC m=+627.935990648 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) since it's in attaching or detaching state
I0903 20:38:53.454718       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47466" resp=200
I0903 20:38:53.499084       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) returned with <nil>
I0903 20:38:53.499142       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef) succeeded
I0903 20:38:53.499154       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef was detached from node:capz-obexd2-mp-0000000
I0903 20:38:53.499176       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef") on node "capz-obexd2-mp-0000000" 
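
Deletion only succeeds after the attach/detach controller has removed the disk from the node, as in the DetachVolume.Detach line above. A script or test can wait for that condition by checking that the volume no longer appears in the node's status.volumesAttached; a hedged client-go sketch follows, with the node and volume names taken from this log purely as examples.

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "capz-obexd2-mp-0000000"                 // example node from the log
	volume := "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" // example volume from the log

	// Poll until the volume no longer appears in the node's attached-volumes list.
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, av := range node.Status.VolumesAttached {
			if strings.Contains(string(av.Name), volume) {
				return false, nil // still attached; the disk cannot be deleted yet
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("disk detached from node; deletion can proceed")
}
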
I0903 20:38:54.460345       1 gc_controller.go:161] GC'ing orphaned
... skipping 2 lines ...
I0903 20:39:02.919402       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-akmlf8" (10.5µs)
I0903 20:39:03.454338       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="181.303µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:33538" resp=200
I0903 20:39:04.422992       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:39:04.427231       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:39:04.508814       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:39:04.509018       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1842
I0903 20:39:04.509169       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase: Failed, bound to: "azuredisk-5356/pvc-ljjqm (uid: 6ae6fd70-0cca-4723-ab39-465c7b98cfef)", boundByController: true
I0903 20:39:04.509208       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is bound to claim azuredisk-5356/pvc-ljjqm
I0903 20:39:04.509234       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: claim azuredisk-5356/pvc-ljjqm not found
I0903 20:39:04.509249       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:39:04.509267       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:39:04.509308       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef] started
I0903 20:39:04.518545       1 pv_controller.go:1340] isVolumeReleased[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is released
... skipping 2 lines ...
I0903 20:39:09.892836       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef
I0903 20:39:09.892902       1 pv_controller.go:1435] volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" deleted
I0903 20:39:09.892934       1 pv_controller.go:1283] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: success
I0903 20:39:09.899729       1 pv_protection_controller.go:205] Got event on PV pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef
I0903 20:39:09.899847       1 pv_protection_controller.go:125] Processing PV pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef
I0903 20:39:09.899748       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" with version 1902
I0903 20:39:09.900062       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: phase: Failed, bound to: "azuredisk-5356/pvc-ljjqm (uid: 6ae6fd70-0cca-4723-ab39-465c7b98cfef)", boundByController: true
I0903 20:39:09.900180       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: volume is bound to claim azuredisk-5356/pvc-ljjqm
I0903 20:39:09.900294       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: claim azuredisk-5356/pvc-ljjqm not found
I0903 20:39:09.900383       1 pv_controller.go:1108] reclaimVolume[pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef]: policy is Delete
I0903 20:39:09.900490       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef[ead31b5c-0838-49d9-a53d-a680e3b32092]]
I0903 20:39:09.900615       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef] started
I0903 20:39:09.903436       1 pv_controller.go:1243] Volume "pvc-6ae6fd70-0cca-4723-ab39-465c7b98cfef" is already being deleted
... skipping 44 lines ...
I0903 20:39:16.740655       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-tfvbt.171174347a98a09e, uid 3892b305-0688-49a7-9c2e-08c45fb4fa33, event type delete
I0903 20:39:16.743475       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-tfvbt.171174347da88f90, uid b06eb180-3de5-4cc7-b72b-b6291d83db04, event type delete
I0903 20:39:16.746325       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-tfvbt.17117434855088ad, uid 1d68897b-2017-4cdc-8a6b-2adf53811093, event type delete
I0903 20:39:16.753657       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-ljjqm.171174305ef60148, uid e0d666c2-8478-4807-8e7c-45eef32669ee, event type delete
I0903 20:39:16.756424       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-ljjqm.1711743115856631, uid 55ab386a-6e3a-4f66-afbd-4ae114fa5064, event type delete
I0903 20:39:16.767536       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-hc4rb, uid 89f1e436-ca83-4f46-a780-453b7c06043f, event type delete
E0903 20:39:16.780867       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-9gsx5" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0903 20:39:16.827276       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid a9a6f72e-9940-4b4e-bf27-10399673f658, event type delete
I0903 20:39:16.830156       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (2.662237ms)
I0903 20:39:16.833550       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid a791f80b-6090-4a69-8a8f-2c6e764aeaef, event type delete
I0903 20:39:16.833762       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (1.9µs)
I0903 20:39:16.834205       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0903 20:39:16.868732       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (2.1µs)
... skipping 750 lines ...
I0903 20:40:47.044828       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
I0903 20:40:47.044906       1 pv_controller.go:1108] reclaimVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: policy is Delete
I0903 20:40:47.045004       1 pv_controller.go:1752] scheduleOperation[delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]]
I0903 20:40:47.045096       1 pv_controller.go:1763] operation "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" is already running, skipping
I0903 20:40:47.049796       1 pv_controller.go:1340] isVolumeReleased[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is released
I0903 20:40:47.049813       1 pv_controller.go:1404] doDeleteVolume [pvc-072a8398-a5d1-4915-99c2-470203f38b81]
I0903 20:40:47.082213       1 pv_controller.go:1259] deletion of volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:40:47.082234       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: set phase Failed
I0903 20:40:47.082243       1 pv_controller.go:858] updating PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: set phase Failed
I0903 20:40:47.085643       1 pv_protection_controller.go:205] Got event on PV pvc-072a8398-a5d1-4915-99c2-470203f38b81
I0903 20:40:47.085834       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2145
I0903 20:40:47.085872       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase: Failed, bound to: "azuredisk-5194/pvc-tfmm6 (uid: 072a8398-a5d1-4915-99c2-470203f38b81)", boundByController: true
I0903 20:40:47.086029       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is bound to claim azuredisk-5194/pvc-tfmm6
I0903 20:40:47.086097       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
I0903 20:40:47.086110       1 pv_controller.go:1108] reclaimVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: policy is Delete
I0903 20:40:47.086125       1 pv_controller.go:1752] scheduleOperation[delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]]
I0903 20:40:47.086179       1 pv_controller.go:1763] operation "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" is already running, skipping
I0903 20:40:47.087192       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2145
I0903 20:40:47.087216       1 pv_controller.go:879] volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" entered phase "Failed"
I0903 20:40:47.087340       1 pv_controller.go:901] volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:40:47.087435       1 goroutinemap.go:150] Operation for "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" failed. No retries permitted until 2022-09-03 20:40:47.58737127 +0000 UTC m=+744.007152926 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:40:47.087706       1 event.go:291] "Event occurred" object="pvc-072a8398-a5d1-4915-99c2-470203f38b81" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:40:49.427441       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:40:49.516453       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:40:49.516706       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 1941
I0903 20:40:49.516743       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase: Bound, bound to: "azuredisk-5194/pvc-qcpdx (uid: e1990edf-8a88-417c-81c0-224719e387db)", boundByController: true
I0903 20:40:49.516750       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-qcpdx" with version 1943
... skipping 28 lines ...
I0903 20:40:49.516995       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-lgrjn]: volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" found: phase: Bound, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:40:49.516997       1 pv_controller.go:861] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase Bound already set
I0903 20:40:49.517004       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-lgrjn]: claim is already correctly bound
I0903 20:40:49.517010       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2145
I0903 20:40:49.517011       1 pv_controller.go:1012] binding volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" to claim "azuredisk-5194/pvc-lgrjn"
I0903 20:40:49.517020       1 pv_controller.go:910] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: binding to "azuredisk-5194/pvc-lgrjn"
I0903 20:40:49.517029       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase: Failed, bound to: "azuredisk-5194/pvc-tfmm6 (uid: 072a8398-a5d1-4915-99c2-470203f38b81)", boundByController: true
I0903 20:40:49.517034       1 pv_controller.go:922] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: already bound to "azuredisk-5194/pvc-lgrjn"
I0903 20:40:49.517040       1 pv_controller.go:858] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Bound
I0903 20:40:49.517048       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is bound to claim azuredisk-5194/pvc-tfmm6
I0903 20:40:49.517048       1 pv_controller.go:861] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase Bound already set
I0903 20:40:49.517056       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-lgrjn]: binding to "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369"
I0903 20:40:49.517065       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
... skipping 5 lines ...
I0903 20:40:49.517106       1 pv_controller.go:1038] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" bound to claim "azuredisk-5194/pvc-lgrjn"
I0903 20:40:49.517109       1 pv_controller.go:1231] deleteVolumeOperation [pvc-072a8398-a5d1-4915-99c2-470203f38b81] started
I0903 20:40:49.517121       1 pv_controller.go:1039] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:40:49.517132       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-lgrjn" status after binding: phase: Bound, bound to: "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369", bindCompleted: true, boundByController: true
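
Interleaved with the delete retries, the resync above re-walks the claims that are still bound in azuredisk-5194 (pvc-qcpdx, pvc-lgrjn) and confirms that claim and volume reference each other, which is what the "claim is already correctly bound" lines correspond to. That bidirectional check can be reproduced with a small client-go sketch; the namespace and claim name below are taken from the log as examples.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Error handling elided for brevity in this sketch.
	cfg, _ := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)

	ns, claim := "azuredisk-5194", "pvc-lgrjn" // example from the log

	pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(context.TODO(), claim, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The claim records the volume it is bound to ...
	pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvc.Spec.VolumeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ... and the volume's claimRef must point back at the claim's UID.
	bound := pv.Spec.ClaimRef != nil && pv.Spec.ClaimRef.UID == pvc.UID
	fmt.Printf("claim %s/%s phase=%s volume=%s boundBothWays=%v\n",
		ns, claim, pvc.Status.Phase, pvc.Spec.VolumeName, bound)
}
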
I0903 20:40:49.522085       1 pv_controller.go:1340] isVolumeReleased[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is released
I0903 20:40:49.522105       1 pv_controller.go:1404] doDeleteVolume [pvc-072a8398-a5d1-4915-99c2-470203f38b81]
I0903 20:40:49.551538       1 pv_controller.go:1259] deletion of volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:40:49.551562       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: set phase Failed
I0903 20:40:49.551571       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase Failed already set
E0903 20:40:49.551738       1 goroutinemap.go:150] Operation for "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" failed. No retries permitted until 2022-09-03 20:40:50.551580887 +0000 UTC m=+746.971362543 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:40:53.453851       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.002µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55262" resp=200
I0903 20:40:54.464822       1 gc_controller.go:161] GC'ing orphaned
I0903 20:40:54.464885       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:40:56.895217       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:40:56.895407       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:40:56.895497       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 to the node "capz-obexd2-mp-0000000" mounted true
... skipping 54 lines ...
I0903 20:41:04.519391       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: all is bound
I0903 20:41:04.519401       1 pv_controller.go:858] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Bound
I0903 20:41:04.519462       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-lgrjn] status: phase Bound already set
I0903 20:41:04.519483       1 pv_controller.go:1038] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" bound to claim "azuredisk-5194/pvc-lgrjn"
I0903 20:41:04.519543       1 pv_controller.go:861] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase Bound already set
I0903 20:41:04.519561       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2145
I0903 20:41:04.519586       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase: Failed, bound to: "azuredisk-5194/pvc-tfmm6 (uid: 072a8398-a5d1-4915-99c2-470203f38b81)", boundByController: true
I0903 20:41:04.519663       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is bound to claim azuredisk-5194/pvc-tfmm6
I0903 20:41:04.519731       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
I0903 20:41:04.519746       1 pv_controller.go:1108] reclaimVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: policy is Delete
I0903 20:41:04.519627       1 pv_controller.go:1039] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:41:04.519825       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-lgrjn" status after binding: phase: Bound, bound to: "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369", bindCompleted: true, boundByController: true
I0903 20:41:04.519843       1 pv_controller.go:1752] scheduleOperation[delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]]
I0903 20:41:04.519910       1 pv_controller.go:1231] deleteVolumeOperation [pvc-072a8398-a5d1-4915-99c2-470203f38b81] started
I0903 20:41:04.526950       1 pv_controller.go:1340] isVolumeReleased[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is released
I0903 20:41:04.526968       1 pv_controller.go:1404] doDeleteVolume [pvc-072a8398-a5d1-4915-99c2-470203f38b81]
I0903 20:41:04.527024       1 pv_controller.go:1259] deletion of volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) since it's in attaching or detaching state
I0903 20:41:04.527040       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: set phase Failed
I0903 20:41:04.527048       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase Failed already set
E0903 20:41:04.527097       1 goroutinemap.go:150] Operation for "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" failed. No retries permitted until 2022-09-03 20:41:06.527056157 +0000 UTC m=+762.946837913 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) since it's in attaching or detaching state
I0903 20:41:05.130928       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:41:09.645273       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0903 20:41:13.112682       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) returned with <nil>
I0903 20:41:13.112734       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81) succeeded
I0903 20:41:13.112745       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81 was detached from node:capz-obexd2-mp-0000000
I0903 20:41:13.112768       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81") on node "capz-obexd2-mp-0000000" 
... skipping 47 lines ...
I0903 20:41:19.519564       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is bound to claim azuredisk-5194/pvc-lgrjn
I0903 20:41:19.519597       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: claim azuredisk-5194/pvc-lgrjn found: phase: Bound, bound to: "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369", bindCompleted: true, boundByController: true
I0903 20:41:19.519618       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: all is bound
I0903 20:41:19.519627       1 pv_controller.go:858] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Bound
I0903 20:41:19.519637       1 pv_controller.go:861] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase Bound already set
I0903 20:41:19.519679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2145
I0903 20:41:19.519708       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase: Failed, bound to: "azuredisk-5194/pvc-tfmm6 (uid: 072a8398-a5d1-4915-99c2-470203f38b81)", boundByController: true
I0903 20:41:19.519756       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is bound to claim azuredisk-5194/pvc-tfmm6
I0903 20:41:19.519800       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
I0903 20:41:19.519817       1 pv_controller.go:1108] reclaimVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: policy is Delete
I0903 20:41:19.519833       1 pv_controller.go:1752] scheduleOperation[delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]]
I0903 20:41:19.519899       1 pv_controller.go:1231] deleteVolumeOperation [pvc-072a8398-a5d1-4915-99c2-470203f38b81] started
I0903 20:41:19.525621       1 pv_controller.go:1340] isVolumeReleased[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is released
... skipping 2 lines ...
I0903 20:41:24.848030       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-072a8398-a5d1-4915-99c2-470203f38b81
I0903 20:41:24.848063       1 pv_controller.go:1435] volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" deleted
I0903 20:41:24.848077       1 pv_controller.go:1283] deleteVolumeOperation [pvc-072a8398-a5d1-4915-99c2-470203f38b81]: success
I0903 20:41:24.856957       1 pv_protection_controller.go:205] Got event on PV pvc-072a8398-a5d1-4915-99c2-470203f38b81
I0903 20:41:24.857056       1 pv_protection_controller.go:125] Processing PV pvc-072a8398-a5d1-4915-99c2-470203f38b81
I0903 20:41:24.857481       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" with version 2204
I0903 20:41:24.857592       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: phase: Failed, bound to: "azuredisk-5194/pvc-tfmm6 (uid: 072a8398-a5d1-4915-99c2-470203f38b81)", boundByController: true
I0903 20:41:24.857668       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: volume is bound to claim azuredisk-5194/pvc-tfmm6
I0903 20:41:24.857728       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: claim azuredisk-5194/pvc-tfmm6 not found
I0903 20:41:24.857752       1 pv_controller.go:1108] reclaimVolume[pvc-072a8398-a5d1-4915-99c2-470203f38b81]: policy is Delete
I0903 20:41:24.857803       1 pv_controller.go:1752] scheduleOperation[delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]]
I0903 20:41:24.857835       1 pv_controller.go:1763] operation "delete-pvc-072a8398-a5d1-4915-99c2-470203f38b81[93a9e2d7-b985-4f16-891c-e2450968c0ca]" is already running, skipping
I0903 20:41:24.861299       1 pv_controller_base.go:235] volume "pvc-072a8398-a5d1-4915-99c2-470203f38b81" deleted
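
The block above is the pattern that repeats for every test volume in this log: once the PVC is deleted, the PV controller finds the claim "not found", applies the PV's reclaim policy (Delete here, since the volumes are dynamically provisioned), schedules a delete operation, removes the managed disk in Azure, and finally deletes the PV object itself. A minimal sketch of that reclaim decision follows; the pv type and reclaim function are hypothetical illustrations, not the actual pv_controller code.

package main

import "fmt"

// pv is a hypothetical, trimmed-down stand-in for a PersistentVolume.
type pv struct {
	name          string
	reclaimPolicy string // "Delete" or "Retain"
	claimExists   bool   // does the bound PVC still exist?
}

// reclaim mirrors the decision logged as "reclaimVolume[...]: policy is Delete".
func reclaim(v pv) string {
	if v.claimExists {
		return "volume is bound, nothing to reclaim"
	}
	switch v.reclaimPolicy {
	case "Delete":
		return "scheduleOperation[delete-" + v.name + "]"
	case "Retain":
		return "volume is released, keep it for manual cleanup"
	default:
		return "unknown reclaim policy " + v.reclaimPolicy
	}
}

func main() {
	// The PVC azuredisk-5194/pvc-tfmm6 has been deleted, so the claim is gone.
	fmt.Println(reclaim(pv{name: "pvc-072a8398", reclaimPolicy: "Delete", claimExists: false}))
}

Running it prints the scheduleOperation line for the released volume, mirroring the reclaimVolume entries above.
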
... skipping 202 lines ...
I0903 20:41:57.199436       1 pv_controller.go:1404] doDeleteVolume [pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]
I0903 20:41:57.233652       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 on node "capz-obexd2-mp-0000000"
I0903 20:41:57.240465       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:41:57.241223       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:41:57.240670       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-obexd2-mp-0000000" succeeded. VolumesAttached: []
I0903 20:41:57.241292       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369") on node "capz-obexd2-mp-0000000" 
I0903 20:41:57.243979       1 pv_controller.go:1259] deletion of volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:41:57.244130       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Failed
I0903 20:41:57.244265       1 pv_controller.go:858] updating PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Failed
I0903 20:41:57.248858       1 operation_generator.go:1599] Verified volume is safe to detach for volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369") on node "capz-obexd2-mp-0000000" 
I0903 20:41:57.250165       1 pv_protection_controller.go:205] Got event on PV pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369
I0903 20:41:57.250163       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" with version 2266
I0903 20:41:57.250197       1 pv_controller.go:879] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" entered phase "Failed"
I0903 20:41:57.250208       1 pv_controller.go:901] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:41:57.250257       1 goroutinemap.go:150] Operation for "delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]" failed. No retries permitted until 2022-09-03 20:41:57.750226867 +0000 UTC m=+814.170008623 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:41:57.250471       1 event.go:291] "Event occurred" object="pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:41:57.250875       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" with version 2266
I0903 20:41:57.251059       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase: Failed, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:41:57.251229       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is bound to claim azuredisk-5194/pvc-lgrjn
I0903 20:41:57.251371       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: claim azuredisk-5194/pvc-lgrjn not found
I0903 20:41:57.251511       1 pv_controller.go:1108] reclaimVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: policy is Delete
I0903 20:41:57.251654       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]]
I0903 20:41:57.251781       1 pv_controller.go:1765] operation "delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]" postponed due to exponential backoff
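
At 20:41:57 the delete fails because the disk is still attached to the scale-set VM, so goroutinemap refuses an immediate retry ("No retries permitted until ... durationBeforeRetry 500ms") and the next resync is postponed; the following attempt at 20:42:04 backs off for 1s. Below is a minimal, standard-library-only sketch of that doubling backoff around a delete that cannot succeed until an asynchronous detach finishes; deleteDisk, the 3s detach delay and the 30s cap are illustrative assumptions, not the controller-manager's actual values.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errStillAttached = errors.New("disk already attached to node, could not be deleted")

func main() {
	// Assume the asynchronous detach completes after ~3s; in the log it is the
	// attach/detach controller that detaches the disk a few seconds later.
	detachDone := time.Now().Add(3 * time.Second)

	// deleteDisk is a hypothetical stand-in for the Azure managed-disk delete call.
	deleteDisk := func() error {
		if time.Now().Before(detachDone) {
			return errStillAttached
		}
		return nil
	}

	delay := 500 * time.Millisecond   // initial durationBeforeRetry, as in the log
	const maxDelay = 30 * time.Second // illustrative cap, not the controller's real value

	for {
		err := deleteDisk()
		if err == nil {
			fmt.Println("deleteVolumeOperation: success")
			return
		}
		fmt.Printf("delete failed: %v; no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2 // 500ms, 1s, 2s, ... the progression visible above
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
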
I0903 20:41:57.262863       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 from node "capz-obexd2-mp-0000000"
... skipping 26 lines ...
I0903 20:42:04.520563       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:42:04.521204       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx found: phase: Bound, bound to: "pvc-e1990edf-8a88-417c-81c0-224719e387db", bindCompleted: true, boundByController: true
I0903 20:42:04.521225       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: all is bound
I0903 20:42:04.521248       1 pv_controller.go:858] updating PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: set phase Bound
I0903 20:42:04.521262       1 pv_controller.go:861] updating PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase Bound already set
I0903 20:42:04.521280       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" with version 2266
I0903 20:42:04.521303       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase: Failed, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:42:04.521349       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is bound to claim azuredisk-5194/pvc-lgrjn
I0903 20:42:04.521373       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: claim azuredisk-5194/pvc-lgrjn not found
I0903 20:42:04.521381       1 pv_controller.go:1108] reclaimVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: policy is Delete
I0903 20:42:04.521413       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]]
I0903 20:42:04.521493       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369] started
I0903 20:42:04.526626       1 pv_controller.go:1340] isVolumeReleased[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is released
I0903 20:42:04.526646       1 pv_controller.go:1404] doDeleteVolume [pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]
I0903 20:42:04.526681       1 pv_controller.go:1259] deletion of volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) since it's in attaching or detaching state
I0903 20:42:04.526695       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: set phase Failed
I0903 20:42:04.526704       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase Failed already set
E0903 20:42:04.526731       1 goroutinemap.go:150] Operation for "delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]" failed. No retries permitted until 2022-09-03 20:42:05.526713361 +0000 UTC m=+821.946495017 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) since it's in attaching or detaching state
I0903 20:42:05.168618       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:42:07.387272       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:42:07.673514       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) returned with <nil>
I0903 20:42:07.673746       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369) succeeded
I0903 20:42:07.673764       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369 was detached from node:capz-obexd2-mp-0000000
I0903 20:42:07.673855       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369") on node "capz-obexd2-mp-0000000" 
... skipping 11 lines ...
I0903 20:42:19.520950       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:42:19.521007       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx found: phase: Bound, bound to: "pvc-e1990edf-8a88-417c-81c0-224719e387db", bindCompleted: true, boundByController: true
I0903 20:42:19.521084       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: all is bound
I0903 20:42:19.521138       1 pv_controller.go:858] updating PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: set phase Bound
I0903 20:42:19.521179       1 pv_controller.go:861] updating PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase Bound already set
I0903 20:42:19.521227       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" with version 2266
I0903 20:42:19.521301       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase: Failed, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:42:19.521383       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is bound to claim azuredisk-5194/pvc-lgrjn
I0903 20:42:19.521447       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: claim azuredisk-5194/pvc-lgrjn not found
I0903 20:42:19.521486       1 pv_controller.go:1108] reclaimVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: policy is Delete
I0903 20:42:19.521536       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]]
I0903 20:42:19.521617       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369] started
I0903 20:42:19.521105       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-qcpdx" with version 1943
... skipping 18 lines ...
I0903 20:42:24.880492       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369
I0903 20:42:24.880524       1 pv_controller.go:1435] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" deleted
I0903 20:42:24.880534       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: success
I0903 20:42:24.890552       1 pv_protection_controller.go:205] Got event on PV pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369
I0903 20:42:24.890584       1 pv_protection_controller.go:125] Processing PV pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369
I0903 20:42:24.890876       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" with version 2304
I0903 20:42:24.890909       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: phase: Failed, bound to: "azuredisk-5194/pvc-lgrjn (uid: a47c4fa0-18b2-4380-ae0b-f02e35c94369)", boundByController: true
I0903 20:42:24.890935       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: volume is bound to claim azuredisk-5194/pvc-lgrjn
I0903 20:42:24.890950       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: claim azuredisk-5194/pvc-lgrjn not found
I0903 20:42:24.890957       1 pv_controller.go:1108] reclaimVolume[pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369]: policy is Delete
I0903 20:42:24.890971       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]]
I0903 20:42:24.890977       1 pv_controller.go:1763] operation "delete-pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369[715d926a-c36b-48eb-ad2b-c8599627dc2c]" is already running, skipping
I0903 20:42:24.894806       1 pv_controller_base.go:235] volume "pvc-a47c4fa0-18b2-4380-ae0b-f02e35c94369" deleted
... skipping 148 lines ...
I0903 20:42:57.512145       1 pv_controller.go:1108] reclaimVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: policy is Delete
I0903 20:42:57.512157       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]]
I0903 20:42:57.512165       1 pv_controller.go:1763] operation "delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]" is already running, skipping
I0903 20:42:57.513714       1 pv_controller.go:1340] isVolumeReleased[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is released
I0903 20:42:57.513854       1 pv_controller.go:1404] doDeleteVolume [pvc-e1990edf-8a88-417c-81c0-224719e387db]
I0903 20:42:57.545261       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db on node "capz-obexd2-mp-0000001"
I0903 20:42:57.557671       1 pv_controller.go:1259] deletion of volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
I0903 20:42:57.557703       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e1990edf-8a88-417c-81c0-224719e387db]: set phase Failed
I0903 20:42:57.557714       1 pv_controller.go:858] updating PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: set phase Failed
I0903 20:42:57.561697       1 pv_protection_controller.go:205] Got event on PV pvc-e1990edf-8a88-417c-81c0-224719e387db
I0903 20:42:57.561697       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 2367
I0903 20:42:57.561730       1 pv_controller.go:879] volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" entered phase "Failed"
I0903 20:42:57.561766       1 pv_controller.go:901] volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
E0903 20:42:57.561849       1 goroutinemap.go:150] Operation for "delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]" failed. No retries permitted until 2022-09-03 20:42:58.061808047 +0000 UTC m=+874.481589803 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted
I0903 20:42:57.562141       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 2367
I0903 20:42:57.562288       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase: Failed, bound to: "azuredisk-5194/pvc-qcpdx (uid: e1990edf-8a88-417c-81c0-224719e387db)", boundByController: true
I0903 20:42:57.562321       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:42:57.562341       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx not found
I0903 20:42:57.562352       1 pv_controller.go:1108] reclaimVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: policy is Delete
I0903 20:42:57.562366       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]]
I0903 20:42:57.562396       1 pv_controller.go:1765] operation "delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]" postponed due to exponential backoff
I0903 20:42:57.562146       1 event.go:291] "Event occurred" object="pvc-e1990edf-8a88-417c-81c0-224719e387db" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_1), could not be deleted"
... skipping 10 lines ...
I0903 20:43:03.514143       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db"
I0903 20:43:03.514250       1 azure_controller_vmss.go:175] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db)
I0903 20:43:04.431451       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:43:04.433590       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:43:04.522283       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:43:04.522352       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 2367
I0903 20:43:04.522391       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase: Failed, bound to: "azuredisk-5194/pvc-qcpdx (uid: e1990edf-8a88-417c-81c0-224719e387db)", boundByController: true
I0903 20:43:04.522428       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:43:04.522449       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx not found
I0903 20:43:04.522458       1 pv_controller.go:1108] reclaimVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: policy is Delete
I0903 20:43:04.522477       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]]
I0903 20:43:04.522504       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e1990edf-8a88-417c-81c0-224719e387db] started
I0903 20:43:04.526938       1 pv_controller.go:1340] isVolumeReleased[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is released
I0903 20:43:04.526957       1 pv_controller.go:1404] doDeleteVolume [pvc-e1990edf-8a88-417c-81c0-224719e387db]
I0903 20:43:04.527012       1 pv_controller.go:1259] deletion of volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) since it's in attaching or detaching state
I0903 20:43:04.527032       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e1990edf-8a88-417c-81c0-224719e387db]: set phase Failed
I0903 20:43:04.527042       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase Failed already set
E0903 20:43:04.527089       1 goroutinemap.go:150] Operation for "delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]" failed. No retries permitted until 2022-09-03 20:43:05.527051756 +0000 UTC m=+881.946833412 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db) since it's in attaching or detaching state
I0903 20:43:05.062749       1 node_lifecycle_controller.go:1047] Node capz-obexd2-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:43:05.193863       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:43:05.254221       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 22 items received
I0903 20:43:05.419490       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0903 20:43:08.434681       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 0 items received
I0903 20:43:09.890303       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 8 lines ...
I0903 20:43:14.469427       1 gc_controller.go:161] GC'ing orphaned
I0903 20:43:14.469460       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:43:15.427443       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0903 20:43:19.434301       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:43:19.523025       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:43:19.523096       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 2367
I0903 20:43:19.523241       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase: Failed, bound to: "azuredisk-5194/pvc-qcpdx (uid: e1990edf-8a88-417c-81c0-224719e387db)", boundByController: true
I0903 20:43:19.523281       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:43:19.523307       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx not found
I0903 20:43:19.523318       1 pv_controller.go:1108] reclaimVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: policy is Delete
I0903 20:43:19.523336       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]]
I0903 20:43:19.523407       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e1990edf-8a88-417c-81c0-224719e387db] started
I0903 20:43:19.528231       1 pv_controller.go:1340] isVolumeReleased[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is released
... skipping 3 lines ...
I0903 20:43:24.806922       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-e1990edf-8a88-417c-81c0-224719e387db
I0903 20:43:24.806954       1 pv_controller.go:1435] volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" deleted
I0903 20:43:24.806967       1 pv_controller.go:1283] deleteVolumeOperation [pvc-e1990edf-8a88-417c-81c0-224719e387db]: success
I0903 20:43:24.814854       1 pv_protection_controller.go:205] Got event on PV pvc-e1990edf-8a88-417c-81c0-224719e387db
I0903 20:43:24.814887       1 pv_protection_controller.go:125] Processing PV pvc-e1990edf-8a88-417c-81c0-224719e387db
I0903 20:43:24.815042       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" with version 2408
I0903 20:43:24.815103       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: phase: Failed, bound to: "azuredisk-5194/pvc-qcpdx (uid: e1990edf-8a88-417c-81c0-224719e387db)", boundByController: true
I0903 20:43:24.815135       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: volume is bound to claim azuredisk-5194/pvc-qcpdx
I0903 20:43:24.815155       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: claim azuredisk-5194/pvc-qcpdx not found
I0903 20:43:24.815186       1 pv_controller.go:1108] reclaimVolume[pvc-e1990edf-8a88-417c-81c0-224719e387db]: policy is Delete
I0903 20:43:24.815207       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]]
I0903 20:43:24.815217       1 pv_controller.go:1763] operation "delete-pvc-e1990edf-8a88-417c-81c0-224719e387db[b836a363-c834-483c-b438-a93587a99177]" is already running, skipping
I0903 20:43:24.819569       1 pv_controller_base.go:235] volume "pvc-e1990edf-8a88-417c-81c0-224719e387db" deleted
... skipping 41 lines ...
I0903 20:43:27.721749       1 pv_controller.go:1485] provisionClaimOperation [azuredisk-1353/pvc-rdhpv] started, class: "azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-wkpnb"
I0903 20:43:27.721819       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-1353/pvc-rdhpv]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0903 20:43:27.722081       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-rdhpv"
I0903 20:43:27.725688       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4" (20.725982ms)
I0903 20:43:27.725993       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4", timestamp:time.Time{wall:0xc0bd0cfbea0c5e71, ext:904125235337, loc:(*time.Location)(0x751a1a0)}}
I0903 20:43:27.726207       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" duration="26.799265ms"
I0903 20:43:27.726242       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-6wh9p\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:43:27.726279       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" startTime="2022-09-03 20:43:27.726260965 +0000 UTC m=+904.146042721"
I0903 20:43:27.726410       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 20:43:27.726717       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-6wh9p" timed out (false) [last progress check: 2022-09-03 20:43:27 +0000 UTC - now: 2022-09-03 20:43:27.726710971 +0000 UTC m=+904.146492627]
I0903 20:43:27.727062       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4"
I0903 20:43:27.732096       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-rdhpv" with version 2434
I0903 20:43:27.732145       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-rdhpv]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 123 lines ...
I0903 20:43:32.198397       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-lgrjn.171174459a001592, uid 8fde1135-b30e-44d7-a500-cdc982d3faf5, event type delete
I0903 20:43:32.201651       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-qcpdx.1711743f3860b6ba, uid 508bad75-ddec-46b4-8f61-3dc3f7a4d85c, event type delete
I0903 20:43:32.206543       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-qcpdx.171174407a299d57, uid 648a07e6-7cae-4b37-99cf-602aa468e8eb, event type delete
I0903 20:43:32.215912       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-tfmm6.171174498bbe57c7, uid c1f4464b-f98c-4242-8fd5-a74839448faf, event type delete
I0903 20:43:32.224816       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-tfmm6.1711744a4988d001, uid fc429a8b-4867-4be3-9e8b-fde9dbd541b9, event type delete
I0903 20:43:32.266485       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-hp5tm, uid 1f34bf25-3473-49cf-b503-7a99628426de, event type delete
E0903 20:43:32.281302       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-wcsbp" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0903 20:43:32.317496       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0903 20:43:32.317728       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid 14d3c3f0-c4e7-4064-86db-14424a7c1566, event type delete
I0903 20:43:32.317752       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2.9µs)
I0903 20:43:32.356021       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2.101µs)
I0903 20:43:32.356424       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5194, estimate: 0, errors: <nil>
I0903 20:43:32.366197       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5194" (278.466596ms)
... skipping 112 lines ...
I0903 20:43:46.591400       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4", timestamp:time.Time{wall:0xc0bd0d00a1cd6b04, ext:922986892160, loc:(*time.Location)(0x751a1a0)}}
I0903 20:43:46.591467       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4-zkmfn in state Running, deletion time 2022-09-03 20:44:16 +0000 UTC
I0903 20:43:46.591640       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4" (245.403µs)
I0903 20:43:46.594983       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" duration="8.367717ms"
I0903 20:43:46.595045       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" startTime="2022-09-03 20:43:46.595007696 +0000 UTC m=+923.014789452"
I0903 20:43:46.595201       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p"
W0903 20:43:46.599790       1 reconciler.go:385] Multi-Attach error for volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88") from node "capz-obexd2-mp-0000000" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4-zkmfn on node capz-obexd2-mp-0000001
I0903 20:43:46.600002       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-6wh9p-689cc964d4-d2fnk" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-10f3b0be-9f7b-4132-93f6-df457b700a88\" Volume is already used by pod(s) azuredisk-volume-tester-6wh9p-689cc964d4-zkmfn"
I0903 20:43:46.602167       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p"
I0903 20:43:46.602440       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" duration="7.415704ms"
I0903 20:43:46.602606       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" startTime="2022-09-03 20:43:46.602585902 +0000 UTC m=+923.022367558"
I0903 20:43:46.603373       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-6wh9p" for a progress check after 598s
I0903 20:43:46.603412       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6wh9p" duration="814.112µs"
I0903 20:43:46.608471       1 replica_set.go:443] Pod azuredisk-volume-tester-6wh9p-689cc964d4-d2fnk updated, objectMeta {Name:azuredisk-volume-tester-6wh9p-689cc964d4-d2fnk GenerateName:azuredisk-volume-tester-6wh9p-689cc964d4- Namespace:azuredisk-1353 SelfLink: UID:9e831eda-9c33-4e8f-bf9f-4569ff3bc26a ResourceVersion:2518 Generation:0 CreationTimestamp:2022-09-03 20:43:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:689cc964d4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-6wh9p-689cc964d4 UID:06212d26-daf2-4a87-a4aa-60d8ce43f9c0 Controller:0xc0024208de BlockOwnerDeletion:0xc0024208df}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 20:43:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06212d26-daf2-4a87-a4aa-60d8ce43f9c0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-6wh9p-689cc964d4-d2fnk GenerateName:azuredisk-volume-tester-6wh9p-689cc964d4- Namespace:azuredisk-1353 SelfLink: UID:9e831eda-9c33-4e8f-bf9f-4569ff3bc26a ResourceVersion:2524 Generation:0 CreationTimestamp:2022-09-03 20:43:46 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:689cc964d4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-6wh9p-689cc964d4 UID:06212d26-daf2-4a87-a4aa-60d8ce43f9c0 Controller:0xc002421d87 BlockOwnerDeletion:0xc002421d88}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 20:43:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06212d26-daf2-4a87-a4aa-60d8ce43f9c0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 20:43:46 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
... skipping 418 lines ...
I0903 20:45:25.658113       1 pv_controller.go:1108] reclaimVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: policy is Delete
I0903 20:45:25.658222       1 pv_controller.go:1752] scheduleOperation[delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]]
I0903 20:45:25.658340       1 pv_controller.go:1763] operation "delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]" is already running, skipping
I0903 20:45:25.658132       1 pv_controller.go:1231] deleteVolumeOperation [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88] started
I0903 20:45:25.660368       1 pv_controller.go:1340] isVolumeReleased[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is released
I0903 20:45:25.660385       1 pv_controller.go:1404] doDeleteVolume [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]
I0903 20:45:25.694697       1 pv_controller.go:1259] deletion of volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:45:25.694719       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: set phase Failed
I0903 20:45:25.694727       1 pv_controller.go:858] updating PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: set phase Failed
I0903 20:45:25.697155       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 on node "capz-obexd2-mp-0000000"
I0903 20:45:25.697531       1 pv_protection_controller.go:205] Got event on PV pvc-10f3b0be-9f7b-4132-93f6-df457b700a88
I0903 20:45:25.697550       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" with version 2701
I0903 20:45:25.698149       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: phase: Failed, bound to: "azuredisk-1353/pvc-rdhpv (uid: 10f3b0be-9f7b-4132-93f6-df457b700a88)", boundByController: true
I0903 20:45:25.698337       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is bound to claim azuredisk-1353/pvc-rdhpv
I0903 20:45:25.698502       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: claim azuredisk-1353/pvc-rdhpv not found
I0903 20:45:25.698668       1 pv_controller.go:1108] reclaimVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: policy is Delete
I0903 20:45:25.698690       1 pv_controller.go:1752] scheduleOperation[delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]]
I0903 20:45:25.698765       1 pv_controller.go:1763] operation "delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]" is already running, skipping
I0903 20:45:25.698618       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" with version 2701
I0903 20:45:25.698928       1 pv_controller.go:879] volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" entered phase "Failed"
I0903 20:45:25.698945       1 pv_controller.go:901] volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:45:25.699170       1 goroutinemap.go:150] Operation for "delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]" failed. No retries permitted until 2022-09-03 20:45:26.199146981 +0000 UTC m=+1022.618928737 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:45:25.699329       1 event.go:291] "Event occurred" object="pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:45:27.177495       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:45:27.177535       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:45:27.215623       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:45:27.215841       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:45:27.218064       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-obexd2-mp-0000000" succeeded. VolumesAttached: []
... skipping 9 lines ...
I0903 20:45:34.433233       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:45:34.439392       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:45:34.474531       1 gc_controller.go:161] GC'ing orphaned
I0903 20:45:34.474589       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:45:34.527987       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:45:34.528052       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" with version 2701
I0903 20:45:34.528090       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: phase: Failed, bound to: "azuredisk-1353/pvc-rdhpv (uid: 10f3b0be-9f7b-4132-93f6-df457b700a88)", boundByController: true
I0903 20:45:34.528122       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is bound to claim azuredisk-1353/pvc-rdhpv
I0903 20:45:34.528138       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: claim azuredisk-1353/pvc-rdhpv not found
I0903 20:45:34.528145       1 pv_controller.go:1108] reclaimVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: policy is Delete
I0903 20:45:34.528161       1 pv_controller.go:1752] scheduleOperation[delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]]
I0903 20:45:34.528188       1 pv_controller.go:1231] deleteVolumeOperation [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88] started
I0903 20:45:34.533627       1 pv_controller.go:1340] isVolumeReleased[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is released
I0903 20:45:34.533645       1 pv_controller.go:1404] doDeleteVolume [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]
I0903 20:45:34.533681       1 pv_controller.go:1259] deletion of volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) since it's in attaching or detaching state
I0903 20:45:34.533697       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: set phase Failed
I0903 20:45:34.533710       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: phase Failed already set
E0903 20:45:34.533738       1 goroutinemap.go:150] Operation for "delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]" failed. No retries permitted until 2022-09-03 20:45:35.533719232 +0000 UTC m=+1031.953500988 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) since it's in attaching or detaching state
I0903 20:45:35.266586       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:45:38.428387       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:45:42.721448       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) returned with <nil>
I0903 20:45:42.721495       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88) succeeded
I0903 20:45:42.721655       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88 was detached from node:capz-obexd2-mp-0000000
I0903 20:45:42.721687       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88") on node "capz-obexd2-mp-0000000" 
I0903 20:45:43.454882       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="107.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40496" resp=200
I0903 20:45:44.421147       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0903 20:45:47.420536       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 0 items received
I0903 20:45:49.439843       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:45:49.528684       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:45:49.528757       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" with version 2701
I0903 20:45:49.528802       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: phase: Failed, bound to: "azuredisk-1353/pvc-rdhpv (uid: 10f3b0be-9f7b-4132-93f6-df457b700a88)", boundByController: true
I0903 20:45:49.528843       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is bound to claim azuredisk-1353/pvc-rdhpv
I0903 20:45:49.528867       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: claim azuredisk-1353/pvc-rdhpv not found
I0903 20:45:49.528876       1 pv_controller.go:1108] reclaimVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: policy is Delete
I0903 20:45:49.528898       1 pv_controller.go:1752] scheduleOperation[delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]]
I0903 20:45:49.528931       1 pv_controller.go:1231] deleteVolumeOperation [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88] started
I0903 20:45:49.534778       1 pv_controller.go:1340] isVolumeReleased[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is released
... skipping 6 lines ...
I0903 20:45:54.836334       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88
I0903 20:45:54.836370       1 pv_controller.go:1435] volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" deleted
I0903 20:45:54.836384       1 pv_controller.go:1283] deleteVolumeOperation [pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: success
I0903 20:45:54.840939       1 pv_protection_controller.go:205] Got event on PV pvc-10f3b0be-9f7b-4132-93f6-df457b700a88
I0903 20:45:54.840968       1 pv_protection_controller.go:125] Processing PV pvc-10f3b0be-9f7b-4132-93f6-df457b700a88
I0903 20:45:54.841275       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-10f3b0be-9f7b-4132-93f6-df457b700a88" with version 2747
I0903 20:45:54.841308       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: phase: Failed, bound to: "azuredisk-1353/pvc-rdhpv (uid: 10f3b0be-9f7b-4132-93f6-df457b700a88)", boundByController: true
I0903 20:45:54.841333       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: volume is bound to claim azuredisk-1353/pvc-rdhpv
I0903 20:45:54.841352       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: claim azuredisk-1353/pvc-rdhpv not found
I0903 20:45:54.841360       1 pv_controller.go:1108] reclaimVolume[pvc-10f3b0be-9f7b-4132-93f6-df457b700a88]: policy is Delete
I0903 20:45:54.841375       1 pv_controller.go:1752] scheduleOperation[delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]]
I0903 20:45:54.841382       1 pv_controller.go:1763] operation "delete-pvc-10f3b0be-9f7b-4132-93f6-df457b700a88[453934c0-09f0-45a3-8420-a69e66e2c9e6]" is already running, skipping
I0903 20:45:54.851175       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-10f3b0be-9f7b-4132-93f6-df457b700a88
... skipping 560 lines ...
I0903 20:46:19.801486       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" to node "capz-obexd2-mp-0000000".
I0903 20:46:19.801498       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" to node "capz-obexd2-mp-0000000".
I0903 20:46:19.801514       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-478fab0d-9c7c-4a3d-8844-6039d298c478. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-478fab0d-9c7c-4a3d-8844-6039d298c478" to node "capz-obexd2-mp-0000000".
I0903 20:46:19.828986       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" lun 0 to node "capz-obexd2-mp-0000000".
I0903 20:46:19.829080       1 azure_controller_vmss.go:101] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - attach disk(capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) with DiskEncryptionSetID()
I0903 20:46:19.864220       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-s92kq, uid 6acd692b-84c4-4742-9d1b-91043c6c7dfb, event type delete
E0903 20:46:19.878965       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-hdlwz" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0903 20:46:19.938454       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0903 20:46:19.938505       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid 4d06e8a7-9776-41d9-9e79-ddfa0df41d0c, event type delete
I0903 20:46:19.938533       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.301µs)
I0903 20:46:19.986435       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (3µs)
I0903 20:46:19.986759       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8266, estimate: 0, errors: <nil>
I0903 20:46:19.994504       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8266" (232.531548ms)
... skipping 309 lines ...
I0903 20:46:55.531451       1 pv_controller.go:1108] reclaimVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: policy is Delete
I0903 20:46:55.531465       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]]
I0903 20:46:55.531472       1 pv_controller.go:1763] operation "delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]" is already running, skipping
I0903 20:46:55.531537       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cddace50-1c99-46a0-a9a2-534753d84d5d] started
I0903 20:46:55.533237       1 pv_controller.go:1340] isVolumeReleased[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is released
I0903 20:46:55.533254       1 pv_controller.go:1404] doDeleteVolume [pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]
I0903 20:46:55.564006       1 pv_controller.go:1259] deletion of volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:46:55.564029       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: set phase Failed
I0903 20:46:55.564037       1 pv_controller.go:858] updating PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: set phase Failed
I0903 20:46:55.567380       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" with version 2981
I0903 20:46:55.567412       1 pv_controller.go:879] volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" entered phase "Failed"
I0903 20:46:55.567549       1 pv_controller.go:901] volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:46:55.567698       1 goroutinemap.go:150] Operation for "delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]" failed. No retries permitted until 2022-09-03 20:46:56.067679905 +0000 UTC m=+1112.487461561 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:46:55.567981       1 event.go:291] "Event occurred" object="pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:46:55.568237       1 pv_protection_controller.go:205] Got event on PV pvc-cddace50-1c99-46a0-a9a2-534753d84d5d
I0903 20:46:55.568394       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" with version 2981
I0903 20:46:55.568530       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: phase: Failed, bound to: "azuredisk-59/pvc-p4n6j (uid: cddace50-1c99-46a0-a9a2-534753d84d5d)", boundByController: true
I0903 20:46:55.568649       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is bound to claim azuredisk-59/pvc-p4n6j
I0903 20:46:55.568799       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: claim azuredisk-59/pvc-p4n6j not found
I0903 20:46:55.568956       1 pv_controller.go:1108] reclaimVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: policy is Delete
I0903 20:46:55.569089       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]]
I0903 20:46:55.569289       1 pv_controller.go:1765] operation "delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]" postponed due to exponential backoff
I0903 20:46:57.289322       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
... skipping 39 lines ...
I0903 20:47:04.532251       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is bound to claim azuredisk-59/pvc-mxnzl
I0903 20:47:04.532278       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: claim azuredisk-59/pvc-mxnzl found: phase: Bound, bound to: "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c", bindCompleted: true, boundByController: true
I0903 20:47:04.532341       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: all is bound
I0903 20:47:04.532355       1 pv_controller.go:858] updating PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: set phase Bound
I0903 20:47:04.532366       1 pv_controller.go:861] updating PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: phase Bound already set
I0903 20:47:04.532412       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" with version 2981
I0903 20:47:04.532451       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: phase: Failed, bound to: "azuredisk-59/pvc-p4n6j (uid: cddace50-1c99-46a0-a9a2-534753d84d5d)", boundByController: true
I0903 20:47:04.532504       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is bound to claim azuredisk-59/pvc-p4n6j
I0903 20:47:04.532542       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: claim azuredisk-59/pvc-p4n6j not found
I0903 20:47:04.532551       1 pv_controller.go:1108] reclaimVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: policy is Delete
I0903 20:47:04.532611       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]]
I0903 20:47:04.532629       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-478fab0d-9c7c-4a3d-8844-6039d298c478" with version 2880
I0903 20:47:04.532681       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-478fab0d-9c7c-4a3d-8844-6039d298c478]: phase: Bound, bound to: "azuredisk-59/pvc-ztr4p (uid: 478fab0d-9c7c-4a3d-8844-6039d298c478)", boundByController: true
... skipping 34 lines ...
I0903 20:47:04.535715       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-ztr4p] status: phase Bound already set
I0903 20:47:04.536681       1 pv_controller.go:1038] volume "pvc-478fab0d-9c7c-4a3d-8844-6039d298c478" bound to claim "azuredisk-59/pvc-ztr4p"
I0903 20:47:04.536852       1 pv_controller.go:1039] volume "pvc-478fab0d-9c7c-4a3d-8844-6039d298c478" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-ztr4p (uid: 478fab0d-9c7c-4a3d-8844-6039d298c478)", boundByController: true
I0903 20:47:04.536969       1 pv_controller.go:1040] claim "azuredisk-59/pvc-ztr4p" status after binding: phase: Bound, bound to: "pvc-478fab0d-9c7c-4a3d-8844-6039d298c478", bindCompleted: true, boundByController: true
I0903 20:47:04.537462       1 pv_controller.go:1340] isVolumeReleased[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is released
I0903 20:47:04.537479       1 pv_controller.go:1404] doDeleteVolume [pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]
I0903 20:47:04.537511       1 pv_controller.go:1259] deletion of volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) since it's in attaching or detaching state
I0903 20:47:04.537627       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: set phase Failed
I0903 20:47:04.537637       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: phase Failed already set
E0903 20:47:04.537665       1 goroutinemap.go:150] Operation for "delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]" failed. No retries permitted until 2022-09-03 20:47:05.537647436 +0000 UTC m=+1121.957429192 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) since it's in attaching or detaching state
I0903 20:47:05.319051       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:47:10.421206       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 13 items received
I0903 20:47:13.454352       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="159.502µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59332" resp=200
I0903 20:47:14.014445       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) returned with <nil>
I0903 20:47:14.014541       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d) succeeded
I0903 20:47:14.014706       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d was detached from node:capz-obexd2-mp-0000000
... skipping 18 lines ...
I0903 20:47:19.534482       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" with version 2981
I0903 20:47:19.534059       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-mxnzl]: phase: Bound, bound to: "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c", bindCompleted: true, boundByController: true
I0903 20:47:19.534737       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-mxnzl]: volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" found: phase: Bound, bound to: "azuredisk-59/pvc-mxnzl (uid: d904d5dc-dae4-4279-b3b2-1f31a5ea445c)", boundByController: true
I0903 20:47:19.534754       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-mxnzl]: claim is already correctly bound
I0903 20:47:19.534766       1 pv_controller.go:1012] binding volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" to claim "azuredisk-59/pvc-mxnzl"
I0903 20:47:19.534813       1 pv_controller.go:910] updating PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: binding to "azuredisk-59/pvc-mxnzl"
I0903 20:47:19.534598       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: phase: Failed, bound to: "azuredisk-59/pvc-p4n6j (uid: cddace50-1c99-46a0-a9a2-534753d84d5d)", boundByController: true
I0903 20:47:19.534956       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is bound to claim azuredisk-59/pvc-p4n6j
I0903 20:47:19.535004       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: claim azuredisk-59/pvc-p4n6j not found
I0903 20:47:19.535020       1 pv_controller.go:1108] reclaimVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: policy is Delete
I0903 20:47:19.535038       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]]
I0903 20:47:19.535060       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-478fab0d-9c7c-4a3d-8844-6039d298c478" with version 2880
I0903 20:47:19.535111       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-478fab0d-9c7c-4a3d-8844-6039d298c478]: phase: Bound, bound to: "azuredisk-59/pvc-ztr4p (uid: 478fab0d-9c7c-4a3d-8844-6039d298c478)", boundByController: true
... skipping 36 lines ...
I0903 20:47:26.168261       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d
I0903 20:47:26.168295       1 pv_controller.go:1435] volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" deleted
I0903 20:47:26.168309       1 pv_controller.go:1283] deleteVolumeOperation [pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: success
I0903 20:47:26.176453       1 pv_protection_controller.go:205] Got event on PV pvc-cddace50-1c99-46a0-a9a2-534753d84d5d
I0903 20:47:26.176484       1 pv_protection_controller.go:125] Processing PV pvc-cddace50-1c99-46a0-a9a2-534753d84d5d
I0903 20:47:26.176520       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" with version 3029
I0903 20:47:26.176546       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: phase: Failed, bound to: "azuredisk-59/pvc-p4n6j (uid: cddace50-1c99-46a0-a9a2-534753d84d5d)", boundByController: true
I0903 20:47:26.176577       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: volume is bound to claim azuredisk-59/pvc-p4n6j
I0903 20:47:26.176594       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: claim azuredisk-59/pvc-p4n6j not found
I0903 20:47:26.176602       1 pv_controller.go:1108] reclaimVolume[pvc-cddace50-1c99-46a0-a9a2-534753d84d5d]: policy is Delete
I0903 20:47:26.176615       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cddace50-1c99-46a0-a9a2-534753d84d5d[5ed85dbf-0fd0-4bdf-a695-634647f006e4]]
I0903 20:47:26.176634       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cddace50-1c99-46a0-a9a2-534753d84d5d] started
I0903 20:47:26.180682       1 pv_controller.go:1243] Volume "pvc-cddace50-1c99-46a0-a9a2-534753d84d5d" is already being deleted
... skipping 153 lines ...
I0903 20:47:41.250391       1 pv_controller.go:1108] reclaimVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: policy is Delete
I0903 20:47:41.250402       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]]
I0903 20:47:41.250408       1 pv_controller.go:1763] operation "delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]" is already running, skipping
I0903 20:47:41.250432       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c] started
I0903 20:47:41.252819       1 pv_controller.go:1340] isVolumeReleased[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is released
I0903 20:47:41.252836       1 pv_controller.go:1404] doDeleteVolume [pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]
I0903 20:47:41.252866       1 pv_controller.go:1259] deletion of volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) since it's in attaching or detaching state
I0903 20:47:41.252880       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: set phase Failed
I0903 20:47:41.252888       1 pv_controller.go:858] updating PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: set phase Failed
I0903 20:47:41.255329       1 pv_protection_controller.go:205] Got event on PV pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c
I0903 20:47:41.255574       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" with version 3063
I0903 20:47:41.255795       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: phase: Failed, bound to: "azuredisk-59/pvc-mxnzl (uid: d904d5dc-dae4-4279-b3b2-1f31a5ea445c)", boundByController: true
I0903 20:47:41.255821       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is bound to claim azuredisk-59/pvc-mxnzl
I0903 20:47:41.255840       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: claim azuredisk-59/pvc-mxnzl not found
I0903 20:47:41.255898       1 pv_controller.go:1108] reclaimVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: policy is Delete
I0903 20:47:41.255940       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]]
I0903 20:47:41.255947       1 pv_controller.go:1763] operation "delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]" is already running, skipping
I0903 20:47:41.256418       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" with version 3063
I0903 20:47:41.256447       1 pv_controller.go:879] volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" entered phase "Failed"
I0903 20:47:41.256456       1 pv_controller.go:901] volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) since it's in attaching or detaching state
E0903 20:47:41.256513       1 goroutinemap.go:150] Operation for "delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]" failed. No retries permitted until 2022-09-03 20:47:41.756474448 +0000 UTC m=+1158.176256104 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) since it's in attaching or detaching state
I0903 20:47:41.256760       1 event.go:291] "Event occurred" object="pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) since it's in attaching or detaching state"
I0903 20:47:41.357347       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 21 items received
I0903 20:47:43.456072       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56202" resp=200
I0903 20:47:45.992270       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 6 items received
I0903 20:47:46.083995       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) returned with <nil>
I0903 20:47:46.084138       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c) succeeded
I0903 20:47:46.084200       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c was detached from node:capz-obexd2-mp-0000000
I0903 20:47:46.084272       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c") on node "capz-obexd2-mp-0000000" 
I0903 20:47:47.651245       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0903 20:47:49.444635       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:47:49.535094       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:47:49.535389       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" with version 3063
I0903 20:47:49.535590       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: phase: Failed, bound to: "azuredisk-59/pvc-mxnzl (uid: d904d5dc-dae4-4279-b3b2-1f31a5ea445c)", boundByController: true
I0903 20:47:49.535764       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is bound to claim azuredisk-59/pvc-mxnzl
I0903 20:47:49.535934       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: claim azuredisk-59/pvc-mxnzl not found
I0903 20:47:49.536111       1 pv_controller.go:1108] reclaimVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: policy is Delete
I0903 20:47:49.536256       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]]
I0903 20:47:49.536451       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c] started
I0903 20:47:49.543308       1 pv_controller.go:1340] isVolumeReleased[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is released
... skipping 4 lines ...
I0903 20:47:55.259413       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c
I0903 20:47:55.259449       1 pv_controller.go:1435] volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" deleted
I0903 20:47:55.259462       1 pv_controller.go:1283] deleteVolumeOperation [pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: success
I0903 20:47:55.265834       1 pv_protection_controller.go:205] Got event on PV pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c
I0903 20:47:55.266153       1 pv_protection_controller.go:125] Processing PV pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c
I0903 20:47:55.266719       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" with version 3085
I0903 20:47:55.267002       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: phase: Failed, bound to: "azuredisk-59/pvc-mxnzl (uid: d904d5dc-dae4-4279-b3b2-1f31a5ea445c)", boundByController: true
I0903 20:47:55.267158       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: volume is bound to claim azuredisk-59/pvc-mxnzl
I0903 20:47:55.267430       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: claim azuredisk-59/pvc-mxnzl not found
I0903 20:47:55.267603       1 pv_controller.go:1108] reclaimVolume[pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c]: policy is Delete
I0903 20:47:55.267760       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c[0474c74a-acaf-44d2-a9a1-664dc3fe02b8]]
I0903 20:47:55.267950       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c] started
I0903 20:47:55.271964       1 pv_controller.go:1243] Volume "pvc-d904d5dc-dae4-4279-b3b2-1f31a5ea445c" is already being deleted
... skipping 438 lines ...
I0903 20:48:27.981507       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:48:27.981574       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8c6e2523-50a7-4313-85af-e71838ed730b] started
I0903 20:48:27.981588       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:48:27.981933       1 pv_controller.go:1763] operation "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" is already running, skipping
I0903 20:48:27.983171       1 pv_controller.go:1340] isVolumeReleased[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is released
I0903 20:48:27.983311       1 pv_controller.go:1404] doDeleteVolume [pvc-8c6e2523-50a7-4313-85af-e71838ed730b]
I0903 20:48:28.017006       1 pv_controller.go:1259] deletion of volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:48:28.017032       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: set phase Failed
I0903 20:48:28.017040       1 pv_controller.go:858] updating PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: set phase Failed
I0903 20:48:28.020005       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3207
I0903 20:48:28.020577       1 pv_controller.go:879] volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" entered phase "Failed"
I0903 20:48:28.020757       1 pv_controller.go:901] volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:48:28.020961       1 goroutinemap.go:150] Operation for "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" failed. No retries permitted until 2022-09-03 20:48:28.520939805 +0000 UTC m=+1204.940721561 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:48:28.020513       1 pv_protection_controller.go:205] Got event on PV pvc-8c6e2523-50a7-4313-85af-e71838ed730b
I0903 20:48:28.020530       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3207
I0903 20:48:28.021582       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:48:28.021741       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:48:28.021887       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
I0903 20:48:28.022011       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:48:28.022133       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:48:28.021163       1 event.go:291] "Event occurred" object="pvc-8c6e2523-50a7-4313-85af-e71838ed730b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
I0903 20:48:28.022352       1 pv_controller.go:1765] operation "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" postponed due to exponential backoff
... skipping 17 lines ...
I0903 20:48:34.538239       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn]: claim is already correctly bound
I0903 20:48:34.538250       1 pv_controller.go:1012] binding volume "pvc-23b62476-57c7-431f-93bc-de2790d65695" to claim "azuredisk-2546/pvc-vlbtn"
I0903 20:48:34.538260       1 pv_controller.go:910] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: binding to "azuredisk-2546/pvc-vlbtn"
I0903 20:48:34.538173       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3207
I0903 20:48:34.538282       1 pv_controller.go:922] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: already bound to "azuredisk-2546/pvc-vlbtn"
I0903 20:48:34.538291       1 pv_controller.go:858] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: set phase Bound
I0903 20:48:34.538299       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:48:34.538301       1 pv_controller.go:861] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase Bound already set
I0903 20:48:34.538311       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn]: binding to "pvc-23b62476-57c7-431f-93bc-de2790d65695"
I0903 20:48:34.538323       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:48:34.538331       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn]: already bound to "pvc-23b62476-57c7-431f-93bc-de2790d65695"
I0903 20:48:34.538341       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn] status: set phase Bound
I0903 20:48:34.538343       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
... skipping 10 lines ...
I0903 20:48:34.538477       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: all is bound
I0903 20:48:34.538484       1 pv_controller.go:858] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: set phase Bound
I0903 20:48:34.538493       1 pv_controller.go:861] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase Bound already set
I0903 20:48:34.538516       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8c6e2523-50a7-4313-85af-e71838ed730b] started
I0903 20:48:34.540905       1 pv_controller.go:1340] isVolumeReleased[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is released
I0903 20:48:34.540926       1 pv_controller.go:1404] doDeleteVolume [pvc-8c6e2523-50a7-4313-85af-e71838ed730b]
I0903 20:48:34.579099       1 pv_controller.go:1259] deletion of volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:48:34.579124       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: set phase Failed
I0903 20:48:34.579135       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase Failed already set
E0903 20:48:34.579164       1 goroutinemap.go:150] Operation for "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" failed. No retries permitted until 2022-09-03 20:48:35.579144337 +0000 UTC m=+1211.998925993 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:48:34.679685       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0903 20:48:35.362318       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:48:37.428220       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:48:37.428580       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-23b62476-57c7-431f-93bc-de2790d65695 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:48:37.429054       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:48:37.451346       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
... skipping 25 lines ...
I0903 20:48:49.539932       1 pv_controller.go:922] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: already bound to "azuredisk-2546/pvc-vlbtn"
I0903 20:48:49.539945       1 pv_controller.go:858] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: set phase Bound
I0903 20:48:49.539955       1 pv_controller.go:861] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase Bound already set
I0903 20:48:49.539964       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn]: binding to "pvc-23b62476-57c7-431f-93bc-de2790d65695"
I0903 20:48:49.539985       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn]: already bound to "pvc-23b62476-57c7-431f-93bc-de2790d65695"
I0903 20:48:49.540063       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn] status: set phase Bound
I0903 20:48:49.539367       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:48:49.540278       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:48:49.540428       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
I0903 20:48:49.540559       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:48:49.540628       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:48:49.540654       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-23b62476-57c7-431f-93bc-de2790d65695" with version 3118
I0903 20:48:49.540137       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn] status: phase Bound already set
... skipping 6 lines ...
I0903 20:48:49.540746       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: claim azuredisk-2546/pvc-vlbtn found: phase: Bound, bound to: "pvc-23b62476-57c7-431f-93bc-de2790d65695", bindCompleted: true, boundByController: true
I0903 20:48:49.540758       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: all is bound
I0903 20:48:49.540763       1 pv_controller.go:858] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: set phase Bound
I0903 20:48:49.540771       1 pv_controller.go:861] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase Bound already set
I0903 20:48:49.546060       1 pv_controller.go:1340] isVolumeReleased[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is released
I0903 20:48:49.546075       1 pv_controller.go:1404] doDeleteVolume [pvc-8c6e2523-50a7-4313-85af-e71838ed730b]
I0903 20:48:49.546107       1 pv_controller.go:1259] deletion of volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) since it's in attaching or detaching state
I0903 20:48:49.546118       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: set phase Failed
I0903 20:48:49.546129       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase Failed already set
E0903 20:48:49.546157       1 goroutinemap.go:150] Operation for "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" failed. No retries permitted until 2022-09-03 20:48:51.546138713 +0000 UTC m=+1227.965920469 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) since it's in attaching or detaching state
I0903 20:48:53.436940       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 16 items received
I0903 20:48:53.453909       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.9µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59236" resp=200
I0903 20:48:54.480686       1 gc_controller.go:161] GC'ing orphaned
I0903 20:48:54.480718       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:48:56.601801       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0903 20:48:59.463903       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
... skipping 8 lines ...
I0903 20:49:04.539689       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: volume is bound to claim azuredisk-2546/pvc-vlbtn
I0903 20:49:04.539710       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: claim azuredisk-2546/pvc-vlbtn found: phase: Bound, bound to: "pvc-23b62476-57c7-431f-93bc-de2790d65695", bindCompleted: true, boundByController: true
I0903 20:49:04.539728       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: all is bound
I0903 20:49:04.539740       1 pv_controller.go:858] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: set phase Bound
I0903 20:49:04.539750       1 pv_controller.go:861] updating PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase Bound already set
I0903 20:49:04.539769       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3207
I0903 20:49:04.539793       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:49:04.539820       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:49:04.539842       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
I0903 20:49:04.539853       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:49:04.539868       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:49:04.539897       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8c6e2523-50a7-4313-85af-e71838ed730b] started
I0903 20:49:04.540077       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-vlbtn" with version 3120
... skipping 11 lines ...
I0903 20:49:04.540232       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-vlbtn] status: phase Bound already set
I0903 20:49:04.540241       1 pv_controller.go:1038] volume "pvc-23b62476-57c7-431f-93bc-de2790d65695" bound to claim "azuredisk-2546/pvc-vlbtn"
I0903 20:49:04.540255       1 pv_controller.go:1039] volume "pvc-23b62476-57c7-431f-93bc-de2790d65695" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-vlbtn (uid: 23b62476-57c7-431f-93bc-de2790d65695)", boundByController: true
I0903 20:49:04.540268       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-vlbtn" status after binding: phase: Bound, bound to: "pvc-23b62476-57c7-431f-93bc-de2790d65695", bindCompleted: true, boundByController: true
I0903 20:49:04.543223       1 pv_controller.go:1340] isVolumeReleased[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is released
I0903 20:49:04.543242       1 pv_controller.go:1404] doDeleteVolume [pvc-8c6e2523-50a7-4313-85af-e71838ed730b]
I0903 20:49:04.543272       1 pv_controller.go:1259] deletion of volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) since it's in attaching or detaching state
I0903 20:49:04.543280       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: set phase Failed
I0903 20:49:04.543287       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase Failed already set
E0903 20:49:04.543306       1 goroutinemap.go:150] Operation for "delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]" failed. No retries permitted until 2022-09-03 20:49:08.543292751 +0000 UTC m=+1244.963074507 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) since it's in attaching or detaching state
I0903 20:49:04.648509       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) returned with <nil>
I0903 20:49:04.648557       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b) succeeded
I0903 20:49:04.648566       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b was detached from node:capz-obexd2-mp-0000000
I0903 20:49:04.648582       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b") on node "capz-obexd2-mp-0000000" 
I0903 20:49:05.377152       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:49:13.454021       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="68.401µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45802" resp=200
I0903 20:49:14.481181       1 gc_controller.go:161] GC'ing orphaned
I0903 20:49:14.481236       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:49:19.448687       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:49:19.540317       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:49:19.540451       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3207
I0903 20:49:19.540535       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:49:19.540618       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:49:19.540674       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
I0903 20:49:19.540689       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:49:19.540705       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:49:19.540725       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-23b62476-57c7-431f-93bc-de2790d65695" with version 3118
I0903 20:49:19.540749       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-23b62476-57c7-431f-93bc-de2790d65695]: phase: Bound, bound to: "azuredisk-2546/pvc-vlbtn (uid: 23b62476-57c7-431f-93bc-de2790d65695)", boundByController: true
... skipping 28 lines ...
I0903 20:49:25.857941       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-8c6e2523-50a7-4313-85af-e71838ed730b
I0903 20:49:25.857987       1 pv_controller.go:1435] volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" deleted
I0903 20:49:25.858015       1 pv_controller.go:1283] deleteVolumeOperation [pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: success
I0903 20:49:25.876779       1 pv_protection_controller.go:205] Got event on PV pvc-8c6e2523-50a7-4313-85af-e71838ed730b
I0903 20:49:25.876811       1 pv_protection_controller.go:125] Processing PV pvc-8c6e2523-50a7-4313-85af-e71838ed730b
I0903 20:49:25.877143       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" with version 3296
I0903 20:49:25.877195       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: phase: Failed, bound to: "azuredisk-2546/pvc-w2kml (uid: 8c6e2523-50a7-4313-85af-e71838ed730b)", boundByController: true
I0903 20:49:25.877235       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: volume is bound to claim azuredisk-2546/pvc-w2kml
I0903 20:49:25.877254       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: claim azuredisk-2546/pvc-w2kml not found
I0903 20:49:25.877263       1 pv_controller.go:1108] reclaimVolume[pvc-8c6e2523-50a7-4313-85af-e71838ed730b]: policy is Delete
I0903 20:49:25.877285       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8c6e2523-50a7-4313-85af-e71838ed730b[dcba4d05-2e4d-40bd-a99f-3a2b83571668]]
I0903 20:49:25.877314       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8c6e2523-50a7-4313-85af-e71838ed730b] started
I0903 20:49:25.882278       1 pv_controller.go:1243] Volume "pvc-8c6e2523-50a7-4313-85af-e71838ed730b" is already being deleted
... skipping 361 lines ...
I0903 20:49:44.249092       1 pv_controller.go:1039] volume "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-dwqs6 (uid: f7ebcefa-e28a-4bae-9f17-f20623f88a6e)", boundByController: true
I0903 20:49:44.249135       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-dwqs6" status after binding: phase: Bound, bound to: "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e", bindCompleted: true, boundByController: true
I0903 20:49:44.492439       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1598
I0903 20:49:44.528509       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1598, name kube-root-ca.crt, uid 2447d008-dd39-4507-b00b-712a39ed05a8, event type delete
I0903 20:49:44.529969       1 publisher.go:186] Finished syncing namespace "azuredisk-1598" (1.41022ms)
I0903 20:49:44.541290       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1598, name default-token-b2jcj, uid 260504f9-a542-4fbb-8cff-622ff9807fea, event type delete
E0903 20:49:44.561730       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1598/default: secrets "default-token-4wl8t" is forbidden: unable to create new content in namespace azuredisk-1598 because it is being terminated
I0903 20:49:44.626253       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1598/default), service account deleted, removing tokens
I0903 20:49:44.626379       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1598, name default, uid 4e6d77e8-b8b1-4bc6-92b6-0d5cecb25be1, event type delete
I0903 20:49:44.626415       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.2µs)
I0903 20:49:44.651460       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.2µs)
I0903 20:49:44.651922       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1598, estimate: 0, errors: <nil>
I0903 20:49:44.659423       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1598" (177.979806ms)
... skipping 13 lines ...
I0903 20:49:45.084622       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" to node "capz-obexd2-mp-0000000".
I0903 20:49:45.085420       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" to node "capz-obexd2-mp-0000000".
I0903 20:49:45.123786       1 node_lifecycle_controller.go:1047] Node capz-obexd2-control-plane-xp4c2 ReadyCondition updated. Updating timestamp.
I0903 20:49:45.138042       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" lun 0 to node "capz-obexd2-mp-0000000".
I0903 20:49:45.138249       1 azure_controller_vmss.go:101] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - attach disk(capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e) with DiskEncryptionSetID()
I0903 20:49:45.169271       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-2lsp2, uid 3ea020f4-e8c0-4f77-9f92-a4552cf22ddf, event type delete
E0903 20:49:45.181722       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-lwgsg" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0903 20:49:45.198073       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0903 20:49:45.198289       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 5fbfc15a-2bc1-4206-a731-6ac7894bd85d, event type delete
I0903 20:49:45.198488       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (1.8µs)
I0903 20:49:45.212947       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid f74afca8-0a11-460c-86bc-4e8b274b4451, event type delete
I0903 20:49:45.217017       1 publisher.go:186] Finished syncing namespace "azuredisk-3410" (4.024956ms)
I0903 20:49:45.233506       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
... skipping 361 lines ...
I0903 20:50:29.978625       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:50:29.978635       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:50:29.978644       1 pv_controller.go:1763] operation "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" is already running, skipping
I0903 20:50:29.978717       1 pv_controller.go:1231] deleteVolumeOperation [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4] started
I0903 20:50:29.980195       1 pv_controller.go:1340] isVolumeReleased[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is released
I0903 20:50:29.980330       1 pv_controller.go:1404] doDeleteVolume [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]
I0903 20:50:30.024563       1 pv_controller.go:1259] deletion of volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:30.024586       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:50:30.024599       1 pv_controller.go:858] updating PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:50:30.028290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:50:30.028317       1 pv_controller.go:879] volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" entered phase "Failed"
I0903 20:50:30.028626       1 pv_protection_controller.go:205] Got event on PV pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4
I0903 20:50:30.028681       1 pv_controller.go:901] volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
E0903 20:50:30.028912       1 goroutinemap.go:150] Operation for "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" failed. No retries permitted until 2022-09-03 20:50:30.528841208 +0000 UTC m=+1326.948622964 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:30.028715       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:50:30.029128       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:50:30.029307       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:50:30.029426       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:50:30.029533       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:50:30.029629       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:50:30.030097       1 pv_controller.go:1765] operation "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" postponed due to exponential backoff
I0903 20:50:30.029257       1 event.go:291] "Event occurred" object="pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted"
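[editor note] The delete operation above keeps failing while the disk is still attached, and the controller reschedules it with a doubling delay (500ms here, then 1s, 2s, 4s, 8s further down in this log). A minimal, self-contained Go sketch of that doubling-backoff retry pattern is below; it only illustrates the behavior visible in these lines and is not the controller's actual goroutinemap code, and `deleteVolume` is a hypothetical stand-in for the cloud call.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// deleteVolume is a hypothetical stand-in for the cloud disk deletion call
// that keeps failing while the disk is still attached to a node.
func deleteVolume(attempt int) error {
	if attempt < 5 {
		return errors.New("disk already attached to node, could not be deleted")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial backoff, as in the log line above
	for attempt := 1; ; attempt++ {
		if err := deleteVolume(attempt); err == nil {
			fmt.Println("volume deleted")
			return
		} else {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
		}
		time.Sleep(delay)
		delay *= 2 // 500ms -> 1s -> 2s -> 4s -> 8s, matching the retry intervals logged here
	}
}
```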
... skipping 9 lines ...
I0903 20:50:34.543812       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:50:34.543832       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq found: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:50:34.543847       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: all is bound
I0903 20:50:34.543866       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Bound
I0903 20:50:34.543928       1 pv_controller.go:861] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase Bound already set
I0903 20:50:34.543985       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:50:34.544028       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:50:34.544054       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:50:34.544074       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:50:34.544077       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-dwqs6" with version 3396
I0903 20:50:34.544096       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-dwqs6]: phase: Bound, bound to: "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e", bindCompleted: true, boundByController: true
I0903 20:50:34.544103       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:50:34.544120       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
... skipping 34 lines ...
I0903 20:50:34.544473       1 pv_controller.go:1039] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:50:34.544490       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-tgthq" status after binding: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:50:34.544222       1 pv_controller.go:858] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: set phase Bound
I0903 20:50:34.544505       1 pv_controller.go:861] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: phase Bound already set
I0903 20:50:34.553470       1 pv_controller.go:1340] isVolumeReleased[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is released
I0903 20:50:34.553487       1 pv_controller.go:1404] doDeleteVolume [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]
I0903 20:50:34.587113       1 pv_controller.go:1259] deletion of volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:34.587138       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:50:34.587147       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase Failed already set
E0903 20:50:34.587201       1 goroutinemap.go:150] Operation for "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" failed. No retries permitted until 2022-09-03 20:50:35.587155947 +0000 UTC m=+1332.006937603 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:35.414540       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:50:37.585556       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-obexd2-mp-0000000"
I0903 20:50:37.585896       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:50:37.586082       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:50:37.586257       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b to the node "capz-obexd2-mp-0000000" mounted false
I0903 20:50:37.605590       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4\"},{\"devicePath\":\"2\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b\"}]}}" for node "capz-obexd2-mp-0000000" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4 1} {kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b 2}]
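[editor note] The node_status_updater line above patches the node status with the list of still-attached Azure disks. As a rough illustration of that status fragment decoded with the upstream API types (assuming k8s.io/api is available; the paths are abbreviated), see the sketch below.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Abbreviated version of the status patch logged above; only the structure matters here.
	patch := []byte(`{"status":{"volumesAttached":[
	  {"devicePath":"1","name":"kubernetes.io/azure-disk//subscriptions/.../disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4"},
	  {"devicePath":"2","name":"kubernetes.io/azure-disk//subscriptions/.../disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b"}]}}`)

	var node corev1.Node
	if err := json.Unmarshal(patch, &node); err != nil {
		panic(err)
	}
	for _, v := range node.Status.VolumesAttached {
		fmt.Printf("devicePath %s -> %s\n", v.DevicePath, v.Name)
	}
}
```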
... skipping 65 lines ...
I0903 20:50:49.545419       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-tgthq" status after binding: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:50:49.545443       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq found: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:50:49.545458       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: all is bound
I0903 20:50:49.545471       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Bound
I0903 20:50:49.545481       1 pv_controller.go:861] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase Bound already set
I0903 20:50:49.545497       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:50:49.545516       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:50:49.545536       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:50:49.545556       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:50:49.545563       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:50:49.545578       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:50:49.545611       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" with version 3394
I0903 20:50:49.545628       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: phase: Bound, bound to: "azuredisk-8582/pvc-dwqs6 (uid: f7ebcefa-e28a-4bae-9f17-f20623f88a6e)", boundByController: true
... skipping 2 lines ...
I0903 20:50:49.545947       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: claim azuredisk-8582/pvc-dwqs6 found: phase: Bound, bound to: "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e", bindCompleted: true, boundByController: true
I0903 20:50:49.545965       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: all is bound
I0903 20:50:49.545973       1 pv_controller.go:858] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: set phase Bound
I0903 20:50:49.545981       1 pv_controller.go:861] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: phase Bound already set
I0903 20:50:49.548665       1 pv_controller.go:1340] isVolumeReleased[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is released
I0903 20:50:49.548682       1 pv_controller.go:1404] doDeleteVolume [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]
I0903 20:50:49.596697       1 pv_controller.go:1259] deletion of volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:49.596725       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:50:49.596736       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase Failed already set
E0903 20:50:49.596794       1 goroutinemap.go:150] Operation for "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" failed. No retries permitted until 2022-09-03 20:50:51.596744712 +0000 UTC m=+1348.016526368 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/virtualMachineScaleSets/capz-obexd2-mp-0/virtualMachines/capz-obexd2-mp-0_0), could not be deleted
I0903 20:50:52.869491       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e) returned with <nil>
I0903 20:50:52.869569       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e) succeeded
I0903 20:50:52.869580       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e was detached from node:capz-obexd2-mp-0000000
I0903 20:50:52.869603       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e") on node "capz-obexd2-mp-0000000" 
I0903 20:50:52.869641       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-obexd2-mp-0000000, refreshing the cache
I0903 20:50:52.940894       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4"
... skipping 37 lines ...
I0903 20:51:04.546288       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-tgthq] status: set phase Bound
I0903 20:51:04.546348       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-tgthq] status: phase Bound already set
I0903 20:51:04.546399       1 pv_controller.go:1038] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" bound to claim "azuredisk-8582/pvc-tgthq"
I0903 20:51:04.546419       1 pv_controller.go:1039] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:51:04.546438       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-tgthq" status after binding: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:51:04.546504       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:51:04.546549       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:51:04.546579       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:51:04.546601       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:51:04.546610       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:51:04.546676       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:51:04.546716       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" with version 3394
I0903 20:51:04.546801       1 pv_controller.go:1231] deleteVolumeOperation [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4] started
... skipping 9 lines ...
I0903 20:51:04.547819       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq found: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:51:04.547946       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: all is bound
I0903 20:51:04.548065       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Bound
I0903 20:51:04.548194       1 pv_controller.go:861] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase Bound already set
I0903 20:51:04.553983       1 pv_controller.go:1340] isVolumeReleased[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is released
I0903 20:51:04.554005       1 pv_controller.go:1404] doDeleteVolume [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]
I0903 20:51:04.554041       1 pv_controller.go:1259] deletion of volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) since it's in attaching or detaching state
I0903 20:51:04.554063       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:51:04.554072       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase Failed already set
E0903 20:51:04.554139       1 goroutinemap.go:150] Operation for "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" failed. No retries permitted until 2022-09-03 20:51:08.55408114 +0000 UTC m=+1364.973862896 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) since it's in attaching or detaching state
I0903 20:51:05.438007       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0903 20:51:08.518497       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0903 20:51:10.428704       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 9 items received
I0903 20:51:13.454753       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="78.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41098" resp=200
I0903 20:51:14.485548       1 gc_controller.go:161] GC'ing orphaned
I0903 20:51:14.485578       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 8 lines ...
I0903 20:51:19.547461       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:51:19.547587       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq found: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:51:19.547607       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: all is bound
I0903 20:51:19.547616       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Bound
I0903 20:51:19.547626       1 pv_controller.go:861] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase Bound already set
I0903 20:51:19.547730       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:51:19.547801       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:51:19.547961       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:51:19.548028       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:51:19.548092       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:51:19.548197       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:51:19.548221       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e" with version 3394
I0903 20:51:19.548243       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: phase: Bound, bound to: "azuredisk-8582/pvc-dwqs6 (uid: f7ebcefa-e28a-4bae-9f17-f20623f88a6e)", boundByController: true
... skipping 32 lines ...
I0903 20:51:19.551936       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-tgthq] status: phase Bound already set
I0903 20:51:19.552044       1 pv_controller.go:1038] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" bound to claim "azuredisk-8582/pvc-tgthq"
I0903 20:51:19.552151       1 pv_controller.go:1039] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:51:19.552271       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-tgthq" status after binding: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:51:19.556558       1 pv_controller.go:1340] isVolumeReleased[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is released
I0903 20:51:19.556575       1 pv_controller.go:1404] doDeleteVolume [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]
I0903 20:51:19.556627       1 pv_controller.go:1259] deletion of volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) since it's in attaching or detaching state
I0903 20:51:19.556644       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: set phase Failed
I0903 20:51:19.556653       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase Failed already set
E0903 20:51:19.556700       1 goroutinemap.go:150] Operation for "delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]" failed. No retries permitted until 2022-09-03 20:51:27.556662498 +0000 UTC m=+1383.976444154 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4) since it's in attaching or detaching state
I0903 20:51:23.454833       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34814" resp=200
I0903 20:51:26.846479       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 4 items received
I0903 20:51:27.422600       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 77 items received
I0903 20:51:28.068031       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 10 items received
I0903 20:51:31.698209       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 8 items received
I0903 20:51:31.950111       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 4 items received
... skipping 24 lines ...
I0903 20:51:34.549030       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:51:34.549142       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq found: phase: Bound, bound to: "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", bindCompleted: true, boundByController: true
I0903 20:51:34.549319       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: all is bound
I0903 20:51:34.549432       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Bound
I0903 20:51:34.549527       1 pv_controller.go:861] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase Bound already set
I0903 20:51:34.550110       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3498
I0903 20:51:34.550388       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:51:34.550525       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:51:34.550623       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:51:34.550723       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:51:34.550837       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:51:34.550949       1 pv_controller.go:1231] deleteVolumeOperation [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4] started
I0903 20:51:34.551218       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-dwqs6" with version 3396
... skipping 34 lines ...
I0903 20:51:40.346204       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4
I0903 20:51:40.346240       1 pv_controller.go:1435] volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" deleted
I0903 20:51:40.346254       1 pv_controller.go:1283] deleteVolumeOperation [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: success
I0903 20:51:40.353481       1 pv_protection_controller.go:205] Got event on PV pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4
I0903 20:51:40.353704       1 pv_protection_controller.go:125] Processing PV pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4
I0903 20:51:40.354210       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" with version 3604
I0903 20:51:40.354408       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: phase: Failed, bound to: "azuredisk-8582/pvc-rqzfv (uid: 21fb3aec-8455-43bb-ae73-2ee23b8739c4)", boundByController: true
I0903 20:51:40.354841       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: volume is bound to claim azuredisk-8582/pvc-rqzfv
I0903 20:51:40.354994       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: claim azuredisk-8582/pvc-rqzfv not found
I0903 20:51:40.355135       1 pv_controller.go:1108] reclaimVolume[pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4]: policy is Delete
I0903 20:51:40.355250       1 pv_controller.go:1752] scheduleOperation[delete-pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4[35049108-471e-4a1a-898c-e719d12fd9e1]]
I0903 20:51:40.355417       1 pv_controller.go:1231] deleteVolumeOperation [pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4] started
I0903 20:51:40.359740       1 pv_controller.go:1243] Volume "pvc-21fb3aec-8455-43bb-ae73-2ee23b8739c4" is already being deleted
... skipping 45 lines ...
I0903 20:51:40.684537       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq not found
I0903 20:51:40.684544       1 pv_controller.go:1108] reclaimVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: policy is Delete
I0903 20:51:40.684554       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]]
I0903 20:51:40.684561       1 pv_controller.go:1763] operation "delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]" is already running, skipping
I0903 20:51:40.686308       1 pv_controller.go:1340] isVolumeReleased[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is released
I0903 20:51:40.686325       1 pv_controller.go:1404] doDeleteVolume [pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]
I0903 20:51:40.686401       1 pv_controller.go:1259] deletion of volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) since it's in attaching or detaching state
I0903 20:51:40.686505       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Failed
I0903 20:51:40.686519       1 pv_controller.go:858] updating PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: set phase Failed
I0903 20:51:40.689396       1 pv_protection_controller.go:205] Got event on PV pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b
I0903 20:51:40.689427       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" with version 3612
I0903 20:51:40.689452       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase: Failed, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:51:40.689478       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:51:40.689535       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq not found
I0903 20:51:40.689548       1 pv_controller.go:1108] reclaimVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: policy is Delete
I0903 20:51:40.689560       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]]
I0903 20:51:40.689570       1 pv_controller.go:1763] operation "delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]" is already running, skipping
I0903 20:51:40.689750       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" with version 3612
I0903 20:51:40.689771       1 pv_controller.go:879] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" entered phase "Failed"
I0903 20:51:40.689855       1 pv_controller.go:901] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) since it's in attaching or detaching state
E0903 20:51:40.689974       1 goroutinemap.go:150] Operation for "delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]" failed. No retries permitted until 2022-09-03 20:51:41.189956405 +0000 UTC m=+1397.609738061 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) since it's in attaching or detaching state
I0903 20:51:40.690099       1 event.go:291] "Event occurred" object="pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) since it's in attaching or detaching state"
I0903 20:51:41.796996       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 5 items received
I0903 20:51:43.454148       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="172.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:33644" resp=200
I0903 20:51:45.726625       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0903 20:51:47.419951       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 10 items received
I0903 20:51:49.236055       1 azure_controller_vmss.go:187] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) returned with <nil>
I0903 20:51:49.236103       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b) succeeded
... skipping 6 lines ...
I0903 20:51:49.548515       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: volume is bound to claim azuredisk-8582/pvc-dwqs6
I0903 20:51:49.548552       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: claim azuredisk-8582/pvc-dwqs6 found: phase: Bound, bound to: "pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e", bindCompleted: true, boundByController: true
I0903 20:51:49.548571       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: all is bound
I0903 20:51:49.548586       1 pv_controller.go:858] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: set phase Bound
I0903 20:51:49.548598       1 pv_controller.go:861] updating PersistentVolume[pvc-f7ebcefa-e28a-4bae-9f17-f20623f88a6e]: phase Bound already set
I0903 20:51:49.548618       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" with version 3612
I0903 20:51:49.548643       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase: Failed, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:51:49.548667       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:51:49.548692       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq not found
I0903 20:51:49.548701       1 pv_controller.go:1108] reclaimVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: policy is Delete
I0903 20:51:49.548724       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]]
I0903 20:51:49.548757       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b] started
I0903 20:51:49.549019       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-dwqs6" with version 3396
... skipping 27 lines ...
I0903 20:51:54.820013       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b
I0903 20:51:54.820049       1 pv_controller.go:1435] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" deleted
I0903 20:51:54.820063       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: success
I0903 20:51:54.825287       1 pv_protection_controller.go:205] Got event on PV pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b
I0903 20:51:54.825315       1 pv_protection_controller.go:125] Processing PV pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b
I0903 20:51:54.825576       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" with version 3633
I0903 20:51:54.825607       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: phase: Failed, bound to: "azuredisk-8582/pvc-tgthq (uid: 4a496a18-1860-4633-afcf-e9d4bd545f0b)", boundByController: true
I0903 20:51:54.825632       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: volume is bound to claim azuredisk-8582/pvc-tgthq
I0903 20:51:54.825667       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: claim azuredisk-8582/pvc-tgthq not found
I0903 20:51:54.825675       1 pv_controller.go:1108] reclaimVolume[pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b]: policy is Delete
I0903 20:51:54.825689       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]]
I0903 20:51:54.825695       1 pv_controller.go:1763] operation "delete-pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b[d0438bbc-2014-4416-9987-317d1409679a]" is already running, skipping
I0903 20:51:54.829563       1 pv_controller_base.go:235] volume "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b" deleted
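[editor note] A test cleanup step generally has to wait for this reclaim loop to finish before moving on. Below is a generic client-go sketch for waiting until a PV object is gone; it is not the azuredisk e2e suite's own helper, and the kubeconfig path and PV name are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVDeleted polls until the named PersistentVolume no longer exists,
// mirroring the reclaim sequence visible in the controller log above.
func waitForPVDeleted(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // PV is gone; reclaim finished
		}
		return false, err // keep polling while the PV still exists (err == nil)
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVDeleted(cs, "pvc-4a496a18-1860-4633-afcf-e9d4bd545f0b", 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("persistent volume deleted")
}
```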
... skipping 115 lines ...
I0903 20:52:09.359524       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-7051/pvc-npx7c[835946d6-a0a4-4207-ba2b-74119e33eb8e]]
I0903 20:52:09.359668       1 pv_controller.go:1763] operation "provision-azuredisk-7051/pvc-npx7c[835946d6-a0a4-4207-ba2b-74119e33eb8e]" is already running, skipping
I0903 20:52:09.362704       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-obexd2-dynamic-pvc-835946d6-a0a4-4207-ba2b-74119e33eb8e StorageAccountType:Standard_LRS Size:10
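[editor note] The provisioner line above creates a Standard_LRS managed disk of 10 GiB for the next spec. For reference, a StorageClass along those lines (expressed here with the upstream Go types rather than YAML; the exact parameters the suite uses are not shown in this log, so treat the values as an assumption) would look roughly like:

```go
package main

import (
	"encoding/json"
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Assumed StorageClass matching the "StorageAccountType:Standard_LRS Size:10" provisioning
	// seen above; parameter names follow the in-tree kubernetes.io/azure-disk provisioner.
	sc := storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "managed-standard-lrs"},
		Provisioner: "kubernetes.io/azure-disk",
		Parameters: map[string]string{
			"storageaccounttype": "Standard_LRS",
			"kind":               "Managed",
		},
	}
	out, _ := json.MarshalIndent(sc, "", "  ")
	fmt.Println(string(out))
}
```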
I0903 20:52:10.424033       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 63 items received
I0903 20:52:11.230472       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0903 20:52:11.247530       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-rcf5m, uid 30dcd913-8687-4349-87f5-f420166a454a, event type delete
E0903 20:52:11.263340       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-656w8" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0903 20:52:11.265545       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174d27ec8bd0e, uid 6c63e5cd-07c4-4e3e-b1f3-89fd1c73a967, event type delete
I0903 20:52:11.269513       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174d638b6b886, uid 93aebac1-5730-45c9-9747-1dcb5a0f1580, event type delete
I0903 20:52:11.279027       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174d90bd10bfe, uid 276af7f1-dfd6-4841-a4f7-7f355a39702c, event type delete
I0903 20:52:11.287050       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174db75b7f05e, uid 84a91d3e-4086-493e-b167-dbf59b4d3800, event type delete
I0903 20:52:11.288890       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174dbf2d717a1, uid 2beca500-a51a-4990-900e-4c7a48083e53, event type delete
I0903 20:52:11.291491       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-6jdqw.171174dbf5d12075, uid df727060-65d6-47b7-ad3f-d17d6d3cf744, event type delete
... skipping 102 lines ...
I0903 20:52:12.559790       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.2µs)
I0903 20:52:12.560091       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0903 20:52:12.573264       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (140.702691ms)
I0903 20:52:12.928436       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 33 items received
I0903 20:52:13.036050       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0903 20:52:13.063151       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-g8x5t, uid c8e22fe0-19d4-432b-9260-3a9681a18b2c, event type delete
E0903 20:52:13.074368       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-mljrv" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0903 20:52:13.099746       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0903 20:52:13.099814       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.5µs)
I0903 20:52:13.099750       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid e2c7e0e7-6ea1-42d7-99c4-bf8e72fc7c58, event type delete
I0903 20:52:13.110966       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid 5b231a11-8140-4fd7-9a27-18bba87a80a8, event type delete
I0903 20:52:13.113659       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (2.642136ms)
I0903 20:52:13.170722       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.901µs)
... skipping 465 lines ...
I0903 20:53:42.395994       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-ce443cf0-a065-43ab-9889-3557b51b86ed" lun 0 to node "capz-obexd2-mp-0000000".
I0903 20:53:42.396037       1 azure_controller_vmss.go:101] azureDisk - update(capz-obexd2): vm(capz-obexd2-mp-0000000) - attach disk(capz-obexd2-dynamic-pvc-ce443cf0-a065-43ab-9889-3557b51b86ed, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-obexd2/providers/Microsoft.Compute/disks/capz-obexd2-dynamic-pvc-ce443cf0-a065-43ab-9889-3557b51b86ed) with DiskEncryptionSetID()
I0903 20:53:42.998515       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 5 items received
I0903 20:53:43.463236       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="87.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40342" resp=200
I0903 20:53:43.682127       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0903 20:53:43.703638       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-t2h54, uid d6ea52c5-3977-42ce-947d-4e260b64575e, event type delete
E0903 20:53:43.717398       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-8rc57" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0903 20:53:43.727863       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid 77477fe5-73ed-4e01-97a7-b3cea20e0c50, event type delete
I0903 20:53:43.729133       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (1.637027ms)
I0903 20:53:43.747243       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0903 20:53:43.747361       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 0a7e4819-7099-40e7-ae9c-9fcd1c43bdea, event type delete
I0903 20:53:43.747433       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (1.8µs)
I0903 20:53:43.760448       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-ll94m.171174f4d47a8a37, uid f46dc29e-b654-4e08-9129-ffe2c12d159c, event type delete
... skipping 451 lines ...
I0903 20:55:04.837797       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8154, estimate: 0, errors: <nil>
2022/09/03 20:55:04 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1301.033 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped
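[editor note] The suite writes its results to /logs/artifacts/junit_01.xml, as reported above. A small, generic sketch for pulling the summary counts back out of that report with the Go standard library is below; the struct covers only the common JUnit attributes and is not the report generator Ginkgo itself uses.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// testsuite models the common JUnit attributes found in a junit_01.xml report.
type testsuite struct {
	XMLName  xml.Name `xml:"testsuite"`
	Tests    int      `xml:"tests,attr"`
	Failures int      `xml:"failures,attr"`
	Skipped  int      `xml:"skipped,attr"` // may be absent in older report formats
	Time     float64  `xml:"time,attr"`
}

func main() {
	data, err := os.ReadFile("/logs/artifacts/junit_01.xml")
	if err != nil {
		panic(err)
	}
	var ts testsuite
	if err := xml.Unmarshal(data, &ts); err != nil {
		panic(err)
	}
	fmt.Printf("%d specs, %d failures, %d skipped, %.1fs\n", ts.Tests, ts.Failures, ts.Skipped, ts.Time)
}
```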

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
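[editor note] The deprecation notice above refers to the Ginkgo v2 migration. For a suite still on v1 like this one, the core change described in the linked guide is the module import path; a minimal sketch of a migrated spec file is below (the Describe/It bodies are illustrative, not taken from this suite).

```go
package e2e

import (
	// Before (Ginkgo v1, as this suite uses):
	//   . "github.com/onsi/ginkgo"
	// After migrating to v2, per the guide linked above:
	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Spec bodies themselves are largely unchanged, e.g.:
var _ = Describe("azuredisk volume tester", func() {
	It("runs a pod against a dynamically provisioned volume", func() {
		Expect(true).To(BeTrue())
	})
})
```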
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-s4vkz, container manager
STEP: Dumping workload cluster default/capz-obexd2 logs
Sep  3 20:57:02.218: INFO: Collecting logs for Linux node capz-obexd2-control-plane-xp4c2 in cluster capz-obexd2 in namespace default

Sep  3 20:58:02.221: INFO: Collecting boot logs for AzureMachine capz-obexd2-control-plane-xp4c2

Failed to get logs for machine capz-obexd2-control-plane-srk8t, cluster default/capz-obexd2: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 20:58:03.132: INFO: Collecting logs for Linux node capz-obexd2-mp-0000000 in cluster capz-obexd2 in namespace default

Sep  3 20:59:03.134: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-obexd2-mp-0

Sep  3 20:59:03.510: INFO: Collecting logs for Linux node capz-obexd2-mp-0000001 in cluster capz-obexd2 in namespace default

Sep  3 21:00:03.512: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-obexd2-mp-0

Failed to get logs for machine pool capz-obexd2-mp-0, cluster default/capz-obexd2: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-obexd2 kube-system pod logs
STEP: Fetching kube-system pod logs took 496.816023ms
STEP: Dumping workload cluster default/capz-obexd2 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-obexd2-control-plane-xp4c2, container etcd
STEP: Collecting events for Pod kube-system/calico-node-l462m
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-r9699, container calico-kube-controllers
... skipping 9 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-zvcx9, container coredns
STEP: Collecting events for Pod kube-system/etcd-capz-obexd2-control-plane-xp4c2
STEP: Creating log watcher for controller kube-system/calico-node-t6tth, container calico-node
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-obexd2-control-plane-xp4c2
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-obexd2-control-plane-xp4c2, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-cvd4t
STEP: failed to find events of Pod "etcd-capz-obexd2-control-plane-xp4c2"
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-zvcx9
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-obexd2-control-plane-xp4c2
STEP: failed to find events of Pod "kube-scheduler-capz-obexd2-control-plane-xp4c2"
STEP: Creating log watcher for controller kube-system/kube-proxy-cvd4t, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-kzbct, container calico-node
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-obexd2-control-plane-xp4c2
STEP: failed to find events of Pod "kube-controller-manager-capz-obexd2-control-plane-xp4c2"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-obexd2-control-plane-xp4c2, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-obexd2-control-plane-xp4c2, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-capz-obexd2-control-plane-xp4c2"
STEP: Collecting events for Pod kube-system/kube-proxy-bk45j
STEP: Fetching activity logs took 1.832661793s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-obexd2" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
... skipping 13 lines ...