Result success
Tests 0 failed / 12 succeeded
Started 2022-09-04 20:09
Elapsed 53m51s
Revision
uploader crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 627 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-ynxxeg-kubeconfig; do sleep 1; done"
capz-ynxxeg-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-ynxxeg-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-ynxxeg-control-plane-lcqxk   NotReady   control-plane,master   9s    v1.22.14-rc.0.3+b89409c45e0dcb
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
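The two commands above follow the standard Cluster API pattern for reaching a new workload cluster: its kubeconfig is stored base64-encoded in the value field of the <cluster-name>-kubeconfig secret on the management cluster. A minimal sketch of the same retrieval follows; the cluster name and plain kubectl binary are assumptions (the job itself uses capz-ynxxeg and a pinned kubectl-v1.22.4).
# Sketch only: fetch and decode a CAPI workload-cluster kubeconfig, then point kubectl at it.
CLUSTER_NAME=my-cluster   # hypothetical; substitute the generated cluster name
kubectl get secret "${CLUSTER_NAME}-kubeconfig" -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
kubectl --kubeconfig=./kubeconfig get nodes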
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-ynxxeg-control-plane-lcqxk condition met
node/capz-ynxxeg-mp-0000000 condition met
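The readiness check above is a shell polling loop (timeout plus while/grep) around kubectl get nodes. A hedged one-liner that blocks until every node reports the Ready condition would be kubectl wait; the timeout value here is illustrative.
# Sketch: wait for all nodes in the workload cluster to report Ready.
kubectl --kubeconfig=./kubeconfig wait node --all --for=condition=Ready --timeout=600s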
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:28:33.172: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-b6pp7" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  4 20:28:33.279: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 107.787641ms
Sep  4 20:28:35.388: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216440917s
Sep  4 20:28:37.497: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325733391s
Sep  4 20:28:39.607: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435073991s
Sep  4 20:28:41.715: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543598519s
Sep  4 20:28:43.824: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.652370122s
Sep  4 20:28:45.934: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.762159133s
Sep  4 20:28:48.043: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.871229538s
Sep  4 20:28:50.152: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.980664505s
Sep  4 20:28:52.269: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.09790452s
Sep  4 20:28:54.386: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.2144973s
Sep  4 20:28:56.503: INFO: Pod "azuredisk-volume-tester-b6pp7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.331733018s
STEP: Saw pod success
Sep  4 20:28:56.503: INFO: Pod "azuredisk-volume-tester-b6pp7" satisfied condition "Succeeded or Failed"
Sep  4 20:28:56.503: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-b6pp7"
Sep  4 20:28:56.627: INFO: Pod azuredisk-volume-tester-b6pp7 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-b6pp7 in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:28:56.965: INFO: deleting PVC "azuredisk-8081"/"pvc-57n6s"
Sep  4 20:28:56.965: INFO: Deleting PersistentVolumeClaim "pvc-57n6s"
STEP: waiting for claim's PV "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" to be deleted
Sep  4 20:28:57.075: INFO: Waiting up to 10m0s for PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 to get deleted
Sep  4 20:28:57.182: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (107.682179ms)
Sep  4 20:29:02.292: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (5.217068282s)
Sep  4 20:29:07.403: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (10.328374084s)
Sep  4 20:29:12.517: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (15.442659728s)
Sep  4 20:29:17.626: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (20.551510578s)
Sep  4 20:29:22.739: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (25.664395349s)
Sep  4 20:29:27.851: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 found and phase=Failed (30.776081939s)
Sep  4 20:29:32.962: INFO: PersistentVolume pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 was removed
Sep  4 20:29:32.962: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  4 20:29:33.070: INFO: Claim "azuredisk-8081" in namespace "pvc-57n6s" doesn't exist in the system
Sep  4 20:29:33.070: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-d898v
Sep  4 20:29:33.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  4 20:29:54.256: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-mhr5r"
Sep  4 20:29:54.366: INFO: Error getting logs for pod azuredisk-volume-tester-mhr5r: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-mhr5r)
STEP: Deleting pod azuredisk-volume-tester-mhr5r in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:29:54.692: INFO: deleting PVC "azuredisk-5466"/"pvc-2j5rr"
Sep  4 20:29:54.692: INFO: Deleting PersistentVolumeClaim "pvc-2j5rr"
STEP: waiting for claim's PV "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" to be deleted
... skipping 17 lines ...
Sep  4 20:31:16.702: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m21.899571886s)
Sep  4 20:31:21.811: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m27.00859505s)
Sep  4 20:31:26.922: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m32.120003166s)
Sep  4 20:31:32.035: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m37.232578309s)
Sep  4 20:31:37.147: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m42.344760725s)
Sep  4 20:31:42.263: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Bound (1m47.460669567s)
Sep  4 20:31:47.375: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (1m52.57280435s)
Sep  4 20:31:52.487: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (1m57.685256075s)
Sep  4 20:31:57.599: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (2m2.796900362s)
Sep  4 20:32:02.714: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (2m7.911913321s)
Sep  4 20:32:07.823: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (2m13.020975271s)
Sep  4 20:32:12.932: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 found and phase=Failed (2m18.129620382s)
Sep  4 20:32:18.043: INFO: PersistentVolume pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 was removed
Sep  4 20:32:18.043: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  4 20:32:18.151: INFO: Claim "azuredisk-5466" in namespace "pvc-2j5rr" doesn't exist in the system
Sep  4 20:32:18.151: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-lf8lb
Sep  4 20:32:18.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:32:20.238: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v9t5d" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  4 20:32:20.346: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 108.053347ms
Sep  4 20:32:22.455: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21696803s
Sep  4 20:32:24.564: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326239153s
Sep  4 20:32:26.673: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434794289s
Sep  4 20:32:28.781: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.543092324s
Sep  4 20:32:30.889: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.651410872s
Sep  4 20:32:32.999: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.76117793s
Sep  4 20:32:35.107: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.869471837s
Sep  4 20:32:37.223: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 16.984818131s
Sep  4 20:32:39.339: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.101131503s
Sep  4 20:32:41.455: INFO: Pod "azuredisk-volume-tester-v9t5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.216885356s
STEP: Saw pod success
Sep  4 20:32:41.455: INFO: Pod "azuredisk-volume-tester-v9t5d" satisfied condition "Succeeded or Failed"
Sep  4 20:32:41.455: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-v9t5d"
Sep  4 20:32:41.578: INFO: Pod azuredisk-volume-tester-v9t5d has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-v9t5d in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:32:41.916: INFO: deleting PVC "azuredisk-2790"/"pvc-gnmxx"
Sep  4 20:32:41.916: INFO: Deleting PersistentVolumeClaim "pvc-gnmxx"
STEP: waiting for claim's PV "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" to be deleted
Sep  4 20:32:42.026: INFO: Waiting up to 10m0s for PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 to get deleted
Sep  4 20:32:42.135: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (109.599362ms)
Sep  4 20:32:47.252: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (5.226594327s)
Sep  4 20:32:52.363: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (10.337687187s)
Sep  4 20:32:57.475: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (15.449739449s)
Sep  4 20:33:02.587: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (20.561619601s)
Sep  4 20:33:07.696: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (25.670711509s)
Sep  4 20:33:12.807: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 found and phase=Failed (30.781398907s)
Sep  4 20:33:17.919: INFO: PersistentVolume pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 was removed
Sep  4 20:33:17.919: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  4 20:33:18.028: INFO: Claim "azuredisk-2790" in namespace "pvc-gnmxx" doesn't exist in the system
Sep  4 20:33:18.028: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-r7zld
Sep  4 20:33:18.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  4 20:33:20.040: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mt962" in namespace "azuredisk-5356" to be "Error status code"
Sep  4 20:33:20.148: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 108.079668ms
Sep  4 20:33:22.257: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217542362s
Sep  4 20:33:24.367: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 4.32694209s
Sep  4 20:33:26.476: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 6.436550535s
Sep  4 20:33:28.585: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 8.544862012s
Sep  4 20:33:30.697: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 10.656773234s
Sep  4 20:33:32.807: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 12.767527442s
Sep  4 20:33:34.917: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 14.877631873s
Sep  4 20:33:37.034: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 16.993711904s
Sep  4 20:33:39.153: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Pending", Reason="", readiness=false. Elapsed: 19.113173563s
Sep  4 20:33:41.269: INFO: Pod "azuredisk-volume-tester-mt962": Phase="Failed", Reason="", readiness=false. Elapsed: 21.229627971s
STEP: Saw pod failure
Sep  4 20:33:41.270: INFO: Pod "azuredisk-volume-tester-mt962" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  4 20:33:41.380: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-mt962"
Sep  4 20:33:41.493: INFO: Pod azuredisk-volume-tester-mt962 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-mt962 in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:33:41.836: INFO: deleting PVC "azuredisk-5356"/"pvc-lnx62"
Sep  4 20:33:41.836: INFO: Deleting PersistentVolumeClaim "pvc-lnx62"
STEP: waiting for claim's PV "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" to be deleted
Sep  4 20:33:41.946: INFO: Waiting up to 10m0s for PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa to get deleted
Sep  4 20:33:42.054: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (107.906528ms)
Sep  4 20:33:47.165: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (5.219782751s)
Sep  4 20:33:52.276: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (10.330035094s)
Sep  4 20:33:57.389: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (15.443830168s)
Sep  4 20:34:02.508: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (20.562016953s)
Sep  4 20:34:07.620: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (25.673873711s)
Sep  4 20:34:12.731: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa found and phase=Failed (30.785615456s)
Sep  4 20:34:17.844: INFO: PersistentVolume pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa was removed
Sep  4 20:34:17.844: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  4 20:34:17.953: INFO: Claim "azuredisk-5356" in namespace "pvc-lnx62" doesn't exist in the system
Sep  4 20:34:17.953: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-6fw5q
Sep  4 20:34:18.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 53 lines ...
Sep  4 20:35:25.316: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (5.223167127s)
Sep  4 20:35:30.430: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (10.336921265s)
Sep  4 20:35:35.548: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (15.45487701s)
Sep  4 20:35:40.662: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (20.568850975s)
Sep  4 20:35:45.773: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (25.679961471s)
Sep  4 20:35:50.886: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Bound (30.793400055s)
Sep  4 20:35:56.000: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (35.906655563s)
Sep  4 20:36:01.113: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (41.020331758s)
Sep  4 20:36:06.224: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (46.130890073s)
Sep  4 20:36:11.337: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (51.243676056s)
Sep  4 20:36:16.450: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (56.357160546s)
Sep  4 20:36:21.564: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (1m1.470965562s)
Sep  4 20:36:26.679: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 found and phase=Failed (1m6.585570919s)
Sep  4 20:36:31.791: INFO: PersistentVolume pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 was removed
Sep  4 20:36:31.791: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:36:31.900: INFO: Claim "azuredisk-5194" in namespace "pvc-zfps8" doesn't exist in the system
Sep  4 20:36:31.900: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-b7bfm
Sep  4 20:36:32.012: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-7fbxt"
Sep  4 20:36:32.122: INFO: Pod azuredisk-volume-tester-7fbxt has the following logs: 
... skipping 7 lines ...
Sep  4 20:36:32.682: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (109.85807ms)
Sep  4 20:36:37.792: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (5.219969133s)
Sep  4 20:36:42.904: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (10.33157994s)
Sep  4 20:36:48.016: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (15.444373695s)
Sep  4 20:36:53.129: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (20.557294074s)
Sep  4 20:36:58.241: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Bound (25.669477078s)
Sep  4 20:37:03.352: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (30.779747909s)
Sep  4 20:37:08.466: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (35.893626297s)
Sep  4 20:37:13.575: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (41.003479877s)
Sep  4 20:37:18.687: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (46.114870907s)
Sep  4 20:37:23.798: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (51.226245384s)
Sep  4 20:37:28.907: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (56.335223975s)
Sep  4 20:37:34.020: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (1m1.44809835s)
Sep  4 20:37:39.132: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c found and phase=Failed (1m6.559583878s)
Sep  4 20:37:44.241: INFO: PersistentVolume pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c was removed
Sep  4 20:37:44.242: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:37:44.350: INFO: Claim "azuredisk-5194" in namespace "pvc-b8svj" doesn't exist in the system
Sep  4 20:37:44.350: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-n4mg6
Sep  4 20:37:44.460: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-fs9xm"
Sep  4 20:37:44.593: INFO: Pod azuredisk-volume-tester-fs9xm has the following logs: 
... skipping 8 lines ...
Sep  4 20:37:50.262: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (5.221519538s)
Sep  4 20:37:55.372: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (10.331531954s)
Sep  4 20:38:00.484: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (15.443590107s)
Sep  4 20:38:05.593: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (20.552891886s)
Sep  4 20:38:10.704: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (25.66336655s)
Sep  4 20:38:15.816: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Bound (30.775598744s)
Sep  4 20:38:20.927: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Failed (35.886740871s)
Sep  4 20:38:26.039: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Failed (40.998895676s)
Sep  4 20:38:31.152: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Failed (46.111454611s)
Sep  4 20:38:36.263: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Failed (51.222846486s)
Sep  4 20:38:41.376: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 found and phase=Failed (56.335395001s)
Sep  4 20:38:46.487: INFO: PersistentVolume pvc-d73a6518-8d09-4568-844a-6993ced4e017 was removed
Sep  4 20:38:46.487: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:38:46.596: INFO: Claim "azuredisk-5194" in namespace "pvc-l8mpx" doesn't exist in the system
Sep  4 20:38:46.596: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-nvwkw
Sep  4 20:38:46.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 59 lines ...
Sep  4 20:40:18.427: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (5.220507537s)
Sep  4 20:40:23.537: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (10.330087101s)
Sep  4 20:40:28.649: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (15.442561798s)
Sep  4 20:40:33.760: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (20.553089512s)
Sep  4 20:40:38.871: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (25.664803649s)
Sep  4 20:40:43.987: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Bound (30.78011009s)
Sep  4 20:40:49.099: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (35.892641474s)
Sep  4 20:40:54.219: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (41.012364094s)
Sep  4 20:40:59.331: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (46.124619065s)
Sep  4 20:41:04.443: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (51.236352188s)
Sep  4 20:41:09.555: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (56.348567617s)
Sep  4 20:41:14.669: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (1m1.462106234s)
Sep  4 20:41:19.780: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (1m6.573362829s)
Sep  4 20:41:24.892: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 found and phase=Failed (1m11.685497457s)
Sep  4 20:41:30.002: INFO: PersistentVolume pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 was removed
Sep  4 20:41:30.002: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  4 20:41:30.111: INFO: Claim "azuredisk-1353" in namespace "pvc-f798z" doesn't exist in the system
Sep  4 20:41:30.111: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-79947
Sep  4 20:41:30.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:41:54.419: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-r694j" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  4 20:41:54.528: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 108.341771ms
Sep  4 20:41:56.638: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.218291048s
Sep  4 20:41:58.753: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333958024s
Sep  4 20:42:00.870: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450729642s
Sep  4 20:42:02.986: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.566504127s
Sep  4 20:42:05.102: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.682541671s
... skipping 10 lines ...
Sep  4 20:42:28.378: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 33.958284003s
Sep  4 20:42:30.493: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 36.074202119s
Sep  4 20:42:32.609: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 38.189677051s
Sep  4 20:42:34.725: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Pending", Reason="", readiness=false. Elapsed: 40.305288526s
Sep  4 20:42:36.840: INFO: Pod "azuredisk-volume-tester-r694j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.421063049s
STEP: Saw pod success
Sep  4 20:42:36.840: INFO: Pod "azuredisk-volume-tester-r694j" satisfied condition "Succeeded or Failed"
Sep  4 20:42:36.840: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-r694j"
Sep  4 20:42:36.959: INFO: Pod azuredisk-volume-tester-r694j has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-r694j in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:42:37.298: INFO: deleting PVC "azuredisk-59"/"pvc-xcbbw"
Sep  4 20:42:37.298: INFO: Deleting PersistentVolumeClaim "pvc-xcbbw"
STEP: waiting for claim's PV "pvc-8e7977b0-e355-4334-a34a-5c472b859209" to be deleted
Sep  4 20:42:37.409: INFO: Waiting up to 10m0s for PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 to get deleted
Sep  4 20:42:37.517: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (108.417931ms)
Sep  4 20:42:42.627: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (5.218151475s)
Sep  4 20:42:47.741: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (10.331973454s)
Sep  4 20:42:52.851: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (15.442058483s)
Sep  4 20:42:57.963: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (20.553719789s)
Sep  4 20:43:03.075: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (25.665837964s)
Sep  4 20:43:08.187: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (30.777880786s)
Sep  4 20:43:13.299: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (35.890170727s)
Sep  4 20:43:18.408: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (40.999294544s)
Sep  4 20:43:23.522: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (46.112450536s)
Sep  4 20:43:28.631: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (51.222370765s)
Sep  4 20:43:33.746: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (56.336566595s)
Sep  4 20:43:38.855: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 found and phase=Failed (1m1.446130448s)
Sep  4 20:43:43.964: INFO: PersistentVolume pvc-8e7977b0-e355-4334-a34a-5c472b859209 was removed
Sep  4 20:43:43.965: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 20:43:44.073: INFO: Claim "azuredisk-59" in namespace "pvc-xcbbw" doesn't exist in the system
Sep  4 20:43:44.073: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-fxdvc
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:44:08.150: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-sg4hv" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  4 20:44:08.259: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 108.632355ms
Sep  4 20:44:10.368: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217799335s
Sep  4 20:44:12.477: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326838607s
Sep  4 20:44:14.588: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.43753668s
Sep  4 20:44:16.698: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.547870668s
Sep  4 20:44:18.808: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.657667601s
... skipping 6 lines ...
Sep  4 20:44:33.596: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.445836772s
Sep  4 20:44:35.706: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 27.555420543s
Sep  4 20:44:37.817: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Pending", Reason="", readiness=false. Elapsed: 29.666203874s
Sep  4 20:44:39.932: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Running", Reason="", readiness=false. Elapsed: 31.781422447s
Sep  4 20:44:42.048: INFO: Pod "azuredisk-volume-tester-sg4hv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.897541272s
STEP: Saw pod success
Sep  4 20:44:42.048: INFO: Pod "azuredisk-volume-tester-sg4hv" satisfied condition "Succeeded or Failed"
Sep  4 20:44:42.048: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-sg4hv"
Sep  4 20:44:42.173: INFO: Pod azuredisk-volume-tester-sg4hv has the following logs: hello world
100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.062678 seconds, 1.6GB/s

STEP: Deleting pod azuredisk-volume-tester-sg4hv in namespace azuredisk-2546
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:44:42.519: INFO: deleting PVC "azuredisk-2546"/"pvc-8g4kg"
Sep  4 20:44:42.520: INFO: Deleting PersistentVolumeClaim "pvc-8g4kg"
STEP: waiting for claim's PV "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" to be deleted
Sep  4 20:44:42.630: INFO: Waiting up to 10m0s for PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 to get deleted
Sep  4 20:44:42.739: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (108.68634ms)
Sep  4 20:44:47.852: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (5.221119457s)
Sep  4 20:44:52.968: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (10.337514966s)
Sep  4 20:44:58.079: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (15.448631474s)
Sep  4 20:45:03.191: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (20.560596245s)
Sep  4 20:45:08.302: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (25.671150993s)
Sep  4 20:45:13.414: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 found and phase=Failed (30.783915365s)
Sep  4 20:45:18.528: INFO: PersistentVolume pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 was removed
Sep  4 20:45:18.528: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  4 20:45:18.636: INFO: Claim "azuredisk-2546" in namespace "pvc-8g4kg" doesn't exist in the system
Sep  4 20:45:18.636: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-r7qdf
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:45:35.275: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ct98k" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  4 20:45:35.386: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 111.29331ms
Sep  4 20:45:37.495: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220417258s
Sep  4 20:45:39.611: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.335877529s
Sep  4 20:45:41.726: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450971316s
Sep  4 20:45:43.841: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.56656923s
Sep  4 20:45:45.956: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.681464196s
... skipping 9 lines ...
Sep  4 20:46:07.116: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 31.840645875s
Sep  4 20:46:09.232: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 33.957421052s
Sep  4 20:46:11.349: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 36.074078746s
Sep  4 20:46:13.466: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Pending", Reason="", readiness=false. Elapsed: 38.190723072s
Sep  4 20:46:15.581: INFO: Pod "azuredisk-volume-tester-ct98k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.306450059s
STEP: Saw pod success
Sep  4 20:46:15.582: INFO: Pod "azuredisk-volume-tester-ct98k" satisfied condition "Succeeded or Failed"
Sep  4 20:46:15.582: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-ct98k"
Sep  4 20:46:15.705: INFO: Pod azuredisk-volume-tester-ct98k has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-ct98k in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:46:16.047: INFO: deleting PVC "azuredisk-8582"/"pvc-lf8pq"
Sep  4 20:46:16.047: INFO: Deleting PersistentVolumeClaim "pvc-lf8pq"
STEP: waiting for claim's PV "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" to be deleted
Sep  4 20:46:16.163: INFO: Waiting up to 10m0s for PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf to get deleted
Sep  4 20:46:16.272: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (108.376832ms)
Sep  4 20:46:21.384: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (5.220898916s)
Sep  4 20:46:26.503: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (10.339464532s)
Sep  4 20:46:31.612: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (15.448981658s)
Sep  4 20:46:36.724: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (20.560809768s)
Sep  4 20:46:41.836: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (25.673197279s)
Sep  4 20:46:46.949: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (30.78599328s)
Sep  4 20:46:52.060: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (35.896328944s)
Sep  4 20:46:57.169: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (41.006206773s)
Sep  4 20:47:02.296: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (46.132691091s)
Sep  4 20:47:07.408: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (51.244582379s)
Sep  4 20:47:12.520: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (56.356740166s)
Sep  4 20:47:17.637: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (1m1.474035785s)
Sep  4 20:47:22.748: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (1m6.584384421s)
Sep  4 20:47:27.861: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf found and phase=Failed (1m11.697558005s)
Sep  4 20:47:32.972: INFO: PersistentVolume pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf was removed
Sep  4 20:47:32.982: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  4 20:47:33.093: INFO: Claim "azuredisk-8582" in namespace "pvc-lf8pq" doesn't exist in the system
Sep  4 20:47:33.093: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-vhcbt
STEP: validating provisioned PV
STEP: checking the PV
... skipping 404 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  4 20:51:02.661: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [1.032 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 247 lines ...
I0904 20:24:29.108958       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-04 20:17:29 +0000 UTC to 2032-09-01 20:22:29 +0000 UTC (now=2022-09-04 20:24:29.108929721 +0000 UTC))"
I0904 20:24:29.118002       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662323068\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662323068\" (2022-09-04 19:24:27 +0000 UTC to 2023-09-04 19:24:27 +0000 UTC (now=2022-09-04 20:24:29.11794314 +0000 UTC))"
I0904 20:24:29.118496       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662323069\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662323068\" (2022-09-04 19:24:28 +0000 UTC to 2023-09-04 19:24:28 +0000 UTC (now=2022-09-04 20:24:29.118464152 +0000 UTC))"
I0904 20:24:29.118679       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0904 20:24:29.119295       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0904 20:24:29.119887       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0904 20:24:31.145649       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 20:24:31.145854       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0904 20:24:34.107962       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0904 20:24:34.108403       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-ynxxeg-control-plane-lcqxk_4f20b9dc-76c7-492e-9874-ec3137a49640 became leader"
W0904 20:24:34.156553       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0904 20:24:34.157215       1 azure_auth.go:232] Using AzurePublicCloud environment
I0904 20:24:34.157260       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0904 20:24:34.157321       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0904 20:24:34.159529       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0904 20:24:34.159791       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0904 20:24:34.159508       1 reflector.go:219] Starting reflector *v1.ServiceAccount (17h0m32.75733176s) from k8s.io/client-go/informers/factory.go:134
I0904 20:24:34.159869       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0904 20:24:34.160009       1 reflector.go:219] Starting reflector *v1.Secret (17h0m32.75733176s) from k8s.io/client-go/informers/factory.go:134
I0904 20:24:34.160109       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0904 20:24:34.194659       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 20:24:34.194864       1 controllermanager.go:562] Starting "endpointslicemirroring"
I0904 20:24:34.208216       1 controllermanager.go:577] Started "endpointslicemirroring"
I0904 20:24:34.208404       1 controllermanager.go:562] Starting "replicaset"
I0904 20:24:34.208371       1 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
I0904 20:24:34.208616       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0904 20:24:34.214241       1 controllermanager.go:577] Started "replicaset"
... skipping 58 lines ...
I0904 20:24:34.871708       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 20:24:34.871780       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 20:24:34.871817       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0904 20:24:34.871906       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0904 20:24:34.871987       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0904 20:24:34.872077       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 20:24:34.872118       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:24:34.872197       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 20:24:34.872385       1 controllermanager.go:577] Started "attachdetach"
I0904 20:24:34.872403       1 controllermanager.go:562] Starting "pv-protection"
I0904 20:24:34.872504       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 20:24:34.872518       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0904 20:24:35.012900       1 controllermanager.go:577] Started "pv-protection"
... skipping 160 lines ...
I0904 20:24:38.062502       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0904 20:24:38.062516       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0904 20:24:38.062533       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0904 20:24:38.062552       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 20:24:38.062568       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0904 20:24:38.062585       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0904 20:24:38.062605       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:24:38.062619       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 20:24:38.062678       1 controllermanager.go:577] Started "persistentvolume-binder"
I0904 20:24:38.062696       1 controllermanager.go:562] Starting "root-ca-cert-publisher"
I0904 20:24:38.063031       1 pv_controller_base.go:308] Starting persistent volume controller
I0904 20:24:38.063051       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0904 20:24:38.212890       1 controllermanager.go:577] Started "root-ca-cert-publisher"
... skipping 291 lines ...
I0904 20:24:39.213889       1 shared_informer.go:247] Caches are synced for garbage collector 
I0904 20:24:39.213900       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0904 20:24:39.659825       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="83.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49908" resp=200
I0904 20:24:40.619566       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-ynxxeg-control-plane-lcqxk}
I0904 20:24:40.619983       1 taint_manager.go:440] "Updating known taints on node" node="capz-ynxxeg-control-plane-lcqxk" taints=[]
I0904 20:24:40.620539       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-control-plane-lcqxk"
W0904 20:24:40.620607       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ynxxeg-control-plane-lcqxk" does not exist
I0904 20:24:40.620650       1 controller.go:693] Ignoring node capz-ynxxeg-control-plane-lcqxk with Ready condition status False
I0904 20:24:40.620727       1 controller.go:272] Triggering nodeSync
I0904 20:24:40.620752       1 controller.go:291] nodeSync has been triggered
I0904 20:24:40.620773       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:24:40.620809       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:24:40.620849       1 controller.go:731] It took 7.5802e-05 seconds to finish nodeSyncInternal
... skipping 35 lines ...
I0904 20:24:44.018359       1 certificate_controller.go:82] Adding certificate request csr-vnv7d
I0904 20:24:44.019802       1 certificate_controller.go:173] Finished syncing certificate request "csr-vnv7d" (5.2µs)
I0904 20:24:44.019884       1 certificate_controller.go:173] Finished syncing certificate request "csr-vnv7d" (4.6µs)
I0904 20:24:44.019884       1 certificate_controller.go:173] Finished syncing certificate request "csr-vnv7d" (5.7µs)
I0904 20:24:44.019484       1 certificate_controller.go:82] Adding certificate request csr-vnv7d
I0904 20:24:44.034010       1 certificate_controller.go:173] Finished syncing certificate request "csr-vnv7d" (14.051908ms)
I0904 20:24:44.034042       1 certificate_controller.go:151] Sync csr-vnv7d failed with : recognized csr "csr-vnv7d" as [selfnodeclient nodeclient] but subject access review was not approved
I0904 20:24:44.055646       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-control-plane-lcqxk"
I0904 20:24:44.055945       1 controller.go:272] Triggering nodeSync
I0904 20:24:44.055952       1 controller.go:291] nodeSync has been triggered
I0904 20:24:44.056269       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:24:44.056378       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:24:44.056472       1 controller.go:731] It took 0.000205203 seconds to finish nodeSyncInternal
... skipping 34 lines ...
I0904 20:24:44.590488       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (55.701µs)
I0904 20:24:44.593849       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (51.889868ms)
I0904 20:24:44.594023       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (51.674366ms)
I0904 20:24:44.596926       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 20:24:44.577829007 +0000 UTC m=+17.025376975 - now: 2022-09-04 20:24:44.59691739 +0000 UTC m=+17.044465358]
I0904 20:24:44.597378       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 20:24:44.609250       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="82.143116ms"
I0904 20:24:44.609298       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:24:44.609331       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:24:44.609314174 +0000 UTC m=+17.056862142"
I0904 20:24:44.610218       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 20:24:44 +0000 UTC - now: 2022-09-04 20:24:44.610211287 +0000 UTC m=+17.057759255]
I0904 20:24:44.613241       1 controller_utils.go:581] Controller coredns-78fcd69978 created pod coredns-78fcd69978-prjlb
I0904 20:24:44.613685       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-prjlb"
I0904 20:24:44.613886       1 replica_set.go:380] Pod coredns-78fcd69978-prjlb created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-78fcd69978-prjlb", GenerateName:"coredns-78fcd69978-", Namespace:"kube-system", SelfLink:"", UID:"3bec4a17-dd44-4623-a8e0-7c8d7b02ffe2", ResourceVersion:"421", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63797919884, loc:(*time.Location)(0x751a1a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"78fcd69978"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"coredns-78fcd69978", UID:"f5c6e9d8-0f28-455e-a87b-64b3d8cdf5e4", Controller:(*bool)(0xc000a8a8af), BlockOwnerDeletion:(*bool)(0xc000a8a910)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001243a40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001243a58), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config-volume", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc0003c4880), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-mvfl2", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00079c2c0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"coredns", Image:"k8s.gcr.io/coredns/coredns:v1.8.4", Command:[]string(nil), Args:[]string{"-conf", "/etc/coredns/Corefile"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"dns", HostPort:0, ContainerPort:53, Protocol:"UDP", HostIP:""}, v1.ContainerPort{Name:"dns-tcp", HostPort:0, ContainerPort:53, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:0, ContainerPort:9153, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:178257920, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"170Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:73400320, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"70Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config-volume", ReadOnly:true, MountPath:"/etc/coredns", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mvfl2", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0003c4a80), ReadinessProbe:(*v1.Probe)(0xc0003c4ac0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00064d0e0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000a8b0e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"coredns", DeprecatedServiceAccount:"coredns", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00033e8c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a8b390)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000a8b3d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc000a8b3d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000a8b3dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0018703a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 20:24:44.614272       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0bd6043224dc389, ext:17023069641, loc:(*time.Location)(0x751a1a0)}}
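The controller_utils.go lines above show the ReplicaSet controller's expectations bookkeeping: before creating the coredns pod it records one expected creation, then lowers that count when the informer reports the created pod, and only trusts its cache once the count reaches zero. A minimal, hypothetical sketch of that counter (not the real ControlleeExpectations type):

package main

import (
	"fmt"
	"sync"
)

// expectations is a simplified stand-in for the controller's expectations
// bookkeeping: it counts how many pod creations are still outstanding per key.
type expectations struct {
	mu   sync.Mutex
	adds map[string]int
}

func newExpectations() *expectations {
	return &expectations{adds: map[string]int{}}
}

// ExpectCreations is called before the controller issues create requests.
func (e *expectations) ExpectCreations(key string, n int) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.adds[key] += n
}

// CreationObserved is called from the pod informer's add handler,
// mirroring the "Lowered expectations" log line above.
func (e *expectations) CreationObserved(key string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.adds[key] > 0 {
		e.adds[key]--
	}
}

// Fulfilled reports whether the controller may trust its caches and resync.
func (e *expectations) Fulfilled(key string) bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.adds[key] == 0
}

func main() {
	exp := newExpectations()
	key := "kube-system/coredns-78fcd69978"
	exp.ExpectCreations(key, 1) // before creating coredns-78fcd69978-prjlb
	exp.CreationObserved(key)   // pod add event seen by the informer
	fmt.Println("expectations fulfilled:", exp.Fulfilled(key))
}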
... skipping 141 lines ...
I0904 20:24:52.135549       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:24:52.126721345 +0000 UTC m=+24.574269413 - now: 2022-09-04 20:24:52.135544592 +0000 UTC m=+24.583092660]
I0904 20:24:52.135824       1 controller_utils.go:581] Controller calico-kube-controllers-969cf87c4 created pod calico-kube-controllers-969cf87c4-tjddb
I0904 20:24:52.135994       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 20:24:52.136456       1 event.go:291] "Event occurred" object="kube-system/calico-kube-controllers-969cf87c4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-969cf87c4-tjddb"
I0904 20:24:52.140435       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-969cf87c4"
I0904 20:24:52.142048       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="20.729944ms"
I0904 20:24:52.142205       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:24:52.142352       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:24:52.142334905 +0000 UTC m=+24.589882873"
I0904 20:24:52.142751       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:24:52 +0000 UTC - now: 2022-09-04 20:24:52.142746411 +0000 UTC m=+24.590294479]
I0904 20:24:52.142959       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (17.184086ms)
I0904 20:24:52.142987       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd6045078229f4, ext:24573519000, loc:(*time.Location)(0x751a1a0)}}
I0904 20:24:52.143099       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 20:24:52.151576       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-969cf87c4"
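The "Error syncing deployment ... the object has been modified" entry above is an ordinary optimistic-concurrency conflict: the controller wrote against a stale resourceVersion and simply requeues the deployment. A client hitting the same error would typically re-read and retry; a hedged client-go sketch (function name and hard-coded deployment are ours, assuming an existing clientset):

package retryexample

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setReplicas re-reads the deployment on every attempt so a conflict like the
// one logged above is resolved by retrying against the latest resourceVersion.
func setReplicas(ctx context.Context, c kubernetes.Interface, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := c.AppsV1().Deployments("kube-system").Get(ctx, "calico-kube-controllers", metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = c.AppsV1().Deployments("kube-system").Update(ctx, d, metav1.UpdateOptions{})
		return err
	})
}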
... skipping 524 lines ...
I0904 20:25:22.588960       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0904 20:25:22.589105       1 daemon_controller.go:1102] Updating daemon set status
I0904 20:25:22.589270       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.09824ms)
I0904 20:25:23.240208       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (248.908µs)
I0904 20:25:23.558458       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:25:23.574777       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:25:23.626321       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-ynxxeg-control-plane-lcqxk transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:25:00 +0000 UTC,LastTransitionTime:2022-09-04 20:24:17 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:25:20 +0000 UTC,LastTransitionTime:2022-09-04 20:25:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:25:23.626401       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-control-plane-lcqxk ReadyCondition updated. Updating timestamp.
I0904 20:25:23.626428       1 node_lifecycle_controller.go:893] Node capz-ynxxeg-control-plane-lcqxk is healthy again, removing all taints
I0904 20:25:23.626446       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 20:25:28.297353       1 replica_set.go:443] Pod calico-kube-controllers-969cf87c4-tjddb updated, objectMeta {Name:calico-kube-controllers-969cf87c4-tjddb GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:a5a89c35-dd69-454f-8b1a-ef992273b542 ResourceVersion:663 Generation:0 CreationTimestamp:2022-09-04 20:24:52 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[cni.projectcalico.org/containerID:d979feee9ced5f905c05b684d16b983525a0a3ad89b3e80b0e05abbf8ac35f1a cni.projectcalico.org/podIP:192.168.55.66/32 cni.projectcalico.org/podIPs:192.168.55.66/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:7471c264-eda5-4a77-a999-0be280118a5e Controller:0xc001857e20 BlockOwnerDeletion:0xc001857e21}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:24:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7471c264-eda5-4a77-a999-0be280118a5e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 20:24:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 20:25:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 20:25:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-kube-controllers-969cf87c4-tjddb GenerateName:calico-kube-controllers-969cf87c4- 
Namespace:kube-system SelfLink: UID:a5a89c35-dd69-454f-8b1a-ef992273b542 ResourceVersion:703 Generation:0 CreationTimestamp:2022-09-04 20:24:52 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[cni.projectcalico.org/containerID:d979feee9ced5f905c05b684d16b983525a0a3ad89b3e80b0e05abbf8ac35f1a cni.projectcalico.org/podIP:192.168.55.66/32 cni.projectcalico.org/podIPs:192.168.55.66/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:7471c264-eda5-4a77-a999-0be280118a5e Controller:0xc001a67167 BlockOwnerDeletion:0xc001a67168}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:24:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7471c264-eda5-4a77-a999-0be280118a5e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 20:24:52 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 20:25:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 20:25:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.55.66\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0904 20:25:28.297544       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd6045078229f4, ext:24573519000, loc:(*time.Location)(0x751a1a0)}}
I0904 20:25:28.297630       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (95.603µs)
... skipping 103 lines ...
I0904 20:26:18.573085       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:26:18.573094       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:26:18.573100       1 controller.go:731] It took 2.04e-05 seconds to finish nodeSyncInternal
I0904 20:26:18.629776       1 gc_controller.go:161] GC'ing orphaned
I0904 20:26:18.629806       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:26:20.397304       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000000"
W0904 20:26:20.397439       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ynxxeg-mp-0000000" does not exist
I0904 20:26:20.397602       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-ynxxeg-mp-0000000}
I0904 20:26:20.397691       1 taint_manager.go:440] "Updating known taints on node" node="capz-ynxxeg-mp-0000000" taints=[]
I0904 20:26:20.397773       1 controller.go:693] Ignoring node capz-ynxxeg-mp-0000000 with Ready condition status False
I0904 20:26:20.399201       1 controller.go:272] Triggering nodeSync
I0904 20:26:20.399271       1 controller.go:291] nodeSync has been triggered
I0904 20:26:20.402655       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 113 lines ...
I0904 20:26:21.285505       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd605b24600517, ext:113057820603, loc:(*time.Location)(0x751a1a0)}}
I0904 20:26:21.286589       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd605b234ca0a4, ext:113039772488, loc:(*time.Location)(0x751a1a0)}}
I0904 20:26:21.288481       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd605b5131c65c, ext:113736022684, loc:(*time.Location)(0x751a1a0)}}
I0904 20:26:21.288585       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-ynxxeg-mp-0000001], creating 1
I0904 20:26:21.289046       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:26:21.289482       1 controller.go:731] It took 0.001698821 seconds to finish nodeSyncInternal
W0904 20:26:21.289387       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ynxxeg-mp-0000001" does not exist
I0904 20:26:21.289456       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd605b5140b31b, ext:113737000895, loc:(*time.Location)(0x751a1a0)}}
I0904 20:26:21.289796       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-ynxxeg-mp-0000001], creating 1
I0904 20:26:21.295397       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/kube-proxy-977w5"
I0904 20:26:21.295456       1 disruption.go:415] addPod called on pod "kube-proxy-977w5"
I0904 20:26:21.295582       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-977w5, PodDisruptionBudget controller will avoid syncing.
I0904 20:26:21.295597       1 disruption.go:418] No matching pdb for pod "kube-proxy-977w5"
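The disruption.go lines above are the PodDisruptionBudget controller reacting to the new kube-proxy pod: it looks for PDBs in the pod's namespace whose selector matches the pod's labels and, finding none, leaves the pod unsynced. A hedged sketch of that selector match (helper name is ours):

package pdbmatch

import (
	corev1 "k8s.io/api/core/v1"
	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// matchingPDBs returns the PDBs from the pod's namespace whose selector
// matches the pod's labels; an empty result corresponds to the
// "No matching pdb for pod" log line above.
func matchingPDBs(pod *corev1.Pod, pdbs []policyv1.PodDisruptionBudget) []policyv1.PodDisruptionBudget {
	var out []policyv1.PodDisruptionBudget
	for _, pdb := range pdbs {
		if pdb.Namespace != pod.Namespace {
			continue
		}
		sel, err := metav1.LabelSelectorAsSelector(pdb.Spec.Selector)
		if err != nil {
			continue
		}
		if sel.Matches(labels.Set(pod.Labels)) {
			out = append(out, pdb)
		}
	}
	return out
}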
... skipping 369 lines ...
I0904 20:26:41.400759       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:26:41.400768       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:26:41.400791       1 controller.go:760] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 20:26:41.400848       1 controller.go:731] It took 4.6e-05 seconds to finish nodeSyncInternal
I0904 20:26:41.412394       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:26:41.412446       1 controller_utils.go:221] Made sure that Node capz-ynxxeg-mp-0000001 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0904 20:26:43.638031       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-ynxxeg-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:26:31 +0000 UTC,LastTransitionTime:2022-09-04 20:26:21 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:26:41 +0000 UTC,LastTransitionTime:2022-09-04 20:26:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:26:43.638102       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:26:43.648486       1 node_lifecycle_controller.go:893] Node capz-ynxxeg-mp-0000001 is healthy again, removing all taints
I0904 20:26:43.649541       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-ynxxeg-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:26:30 +0000 UTC,LastTransitionTime:2022-09-04 20:26:20 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:26:40 +0000 UTC,LastTransitionTime:2022-09-04 20:26:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:26:43.649678       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:26:43.650290       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:26:43.650461       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-ynxxeg-mp-0000001}
I0904 20:26:43.650530       1 taint_manager.go:440] "Updating known taints on node" node="capz-ynxxeg-mp-0000001" taints=[]
I0904 20:26:43.650599       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-ynxxeg-mp-0000001"
I0904 20:26:43.656814       1 node_lifecycle_controller.go:893] Node capz-ynxxeg-mp-0000000 is healthy again, removing all taints
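The node_lifecycle_controller entries above follow the usual pattern as Calico comes up on each node: the Ready condition flips from False (NetworkPluginNotReady) to True, and the controller then removes the node.kubernetes.io/not-ready taint so workloads can schedule. A small hedged helper that checks for that end state (function name is ours):

package nodeready

import (
	corev1 "k8s.io/api/core/v1"
)

// isSchedulableReady reports whether a node is Ready and no longer carries
// the not-ready taint that the node lifecycle controller removes once the
// kubelet posts Ready, as in the log lines above.
func isSchedulableReady(node *corev1.Node) bool {
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
			break
		}
	}
	if !ready {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/not-ready" {
			return false
		}
	}
	return true
}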
... skipping 400 lines ...
I0904 20:28:57.051459       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: claim azuredisk-8081/pvc-57n6s not found
I0904 20:28:57.051573       1 pv_controller.go:1108] reclaimVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: policy is Delete
I0904 20:28:57.051664       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]]
I0904 20:28:57.051677       1 pv_controller.go:1763] operation "delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]" is already running, skipping
I0904 20:28:57.053199       1 pv_controller.go:1340] isVolumeReleased[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is released
I0904 20:28:57.053215       1 pv_controller.go:1404] doDeleteVolume [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]
I0904 20:28:57.082806       1 pv_controller.go:1259] deletion of volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:28:57.082832       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: set phase Failed
I0904 20:28:57.082846       1 pv_controller.go:858] updating PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: set phase Failed
I0904 20:28:57.086112       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" with version 1176
I0904 20:28:57.086140       1 pv_controller.go:879] volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" entered phase "Failed"
I0904 20:28:57.086172       1 pv_controller.go:901] volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
E0904 20:28:57.086302       1 goroutinemap.go:150] Operation for "delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]" failed. No retries permitted until 2022-09-04 20:28:57.586282205 +0000 UTC m=+270.033830173 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:28:57.086562       1 event.go:291] "Event occurred" object="pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted"
I0904 20:28:57.086859       1 pv_protection_controller.go:205] Got event on PV pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2
I0904 20:28:57.087019       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" with version 1176
I0904 20:28:57.087185       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: phase: Failed, bound to: "azuredisk-8081/pvc-57n6s (uid: e9603c6d-5db3-4175-a6b7-5a998d0248f2)", boundByController: true
I0904 20:28:57.087343       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is bound to claim azuredisk-8081/pvc-57n6s
I0904 20:28:57.087469       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: claim azuredisk-8081/pvc-57n6s not found
I0904 20:28:57.087643       1 pv_controller.go:1108] reclaimVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: policy is Delete
I0904 20:28:57.087749       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]]
I0904 20:28:57.087859       1 pv_controller.go:1765] operation "delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]" postponed due to exponential backoff
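The goroutinemap error above illustrates the PV controller's per-operation backoff: the first delete fails because the disk is still attached to the scale-set VM, so the operation is postponed for 500ms, then 1s on the next failure, doubling until a later resync finally succeeds. The same retry shape can be written with apimachinery's wait helpers; a hedged sketch (deleteDisk is a placeholder):

package backoffexample

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteWithBackoff retries a delete that can fail transiently, for example
// while a disk is still attached, doubling the delay between attempts,
// similar in spirit to the controller's per-operation backoff above.
func deleteWithBackoff(deleteDisk func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as in the log
		Factor:   2.0,                    // 500ms, 1s, 2s, ...
		Steps:    6,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteDisk(); err != nil {
			// Treat the error as retryable and wait for the next step.
			return false, nil
		}
		return true, nil
	})
}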
I0904 20:28:58.634919       1 gc_controller.go:161] GC'ing orphaned
... skipping 12 lines ...
I0904 20:29:00.882807       1 azure_controller_vmss.go:175] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2)
I0904 20:29:03.677589       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:29:08.566463       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:08.567546       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:08.584920       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:29:08.585042       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" with version 1176
I0904 20:29:08.585359       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: phase: Failed, bound to: "azuredisk-8081/pvc-57n6s (uid: e9603c6d-5db3-4175-a6b7-5a998d0248f2)", boundByController: true
I0904 20:29:08.585617       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is bound to claim azuredisk-8081/pvc-57n6s
I0904 20:29:08.585684       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: claim azuredisk-8081/pvc-57n6s not found
I0904 20:29:08.585856       1 pv_controller.go:1108] reclaimVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: policy is Delete
I0904 20:29:08.585912       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]]
I0904 20:29:08.586137       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2] started
I0904 20:29:08.589541       1 pv_controller.go:1340] isVolumeReleased[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is released
I0904 20:29:08.589561       1 pv_controller.go:1404] doDeleteVolume [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]
I0904 20:29:08.589655       1 pv_controller.go:1259] deletion of volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) since it's in attaching or detaching state
I0904 20:29:08.589706       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: set phase Failed
I0904 20:29:08.589730       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: phase Failed already set
E0904 20:29:08.589788       1 goroutinemap.go:150] Operation for "delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]" failed. No retries permitted until 2022-09-04 20:29:09.589739983 +0000 UTC m=+282.037288051 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) since it's in attaching or detaching state
I0904 20:29:08.981613       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:29:10.547375       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="91.602µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:52020" resp=200
I0904 20:29:16.215252       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) returned with <nil>
I0904 20:29:16.215322       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2) succeeded
I0904 20:29:16.215335       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2 was detached from node:capz-ynxxeg-mp-0000000
I0904 20:29:16.215359       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2") on node "capz-ynxxeg-mp-0000000" 
I0904 20:29:18.635041       1 gc_controller.go:161] GC'ing orphaned
I0904 20:29:18.635081       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:29:20.547652       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="98.902µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54522" resp=200
I0904 20:29:20.741889       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000000"
I0904 20:29:23.568269       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:23.585553       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:29:23.585970       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" with version 1176
I0904 20:29:23.586018       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: phase: Failed, bound to: "azuredisk-8081/pvc-57n6s (uid: e9603c6d-5db3-4175-a6b7-5a998d0248f2)", boundByController: true
I0904 20:29:23.586069       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is bound to claim azuredisk-8081/pvc-57n6s
I0904 20:29:23.586090       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: claim azuredisk-8081/pvc-57n6s not found
I0904 20:29:23.586101       1 pv_controller.go:1108] reclaimVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: policy is Delete
I0904 20:29:23.586148       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]]
I0904 20:29:23.586182       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2] started
I0904 20:29:23.591379       1 pv_controller.go:1340] isVolumeReleased[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is released
I0904 20:29:23.591397       1 pv_controller.go:1404] doDeleteVolume [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]
I0904 20:29:23.681900       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:29:28.894928       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2
I0904 20:29:28.895001       1 pv_controller.go:1435] volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" deleted
I0904 20:29:28.895015       1 pv_controller.go:1283] deleteVolumeOperation [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: success
I0904 20:29:28.902059       1 pv_protection_controller.go:205] Got event on PV pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2
I0904 20:29:28.902093       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2" with version 1226
I0904 20:29:28.902363       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: phase: Failed, bound to: "azuredisk-8081/pvc-57n6s (uid: e9603c6d-5db3-4175-a6b7-5a998d0248f2)", boundByController: true
I0904 20:29:28.902420       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: volume is bound to claim azuredisk-8081/pvc-57n6s
I0904 20:29:28.902448       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: claim azuredisk-8081/pvc-57n6s not found
I0904 20:29:28.902462       1 pv_controller.go:1108] reclaimVolume[pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2]: policy is Delete
I0904 20:29:28.902524       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2[577d92f4-46ca-4230-a49e-cb0cf889eda2]]
I0904 20:29:28.902576       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2] started
I0904 20:29:28.902792       1 pv_protection_controller.go:125] Processing PV pvc-e9603c6d-5db3-4175-a6b7-5a998d0248f2
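With the disk detached at 20:29:16, the next deleteVolumeOperation succeeds and the PV object itself is removed, which is the state the test suite ultimately waits for. A hedged client-go sketch of such a wait (helper name and poll interval are ours):

package pvwait

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls until the PersistentVolume object is gone, the
// observable end state of the delete sequence in the log above.
func waitForPVDeleted(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().PersistentVolumes().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil
		}
		if err != nil {
			return false, err
		}
		return false, nil
	})
}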
... skipping 52 lines ...
I0904 20:29:38.454919       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-b6pp7.1711c243993d7bbc, uid 8f47172c-6fde-4efb-9155-46689a773a2b, event type delete
I0904 20:29:38.457288       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name pvc-57n6s.1711c23eee022aac, uid cc3358c9-723f-434a-8f5b-eface27bb084, event type delete
I0904 20:29:38.460382       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name pvc-57n6s.1711c23f94564759, uid 1a7e9dbb-e21d-4f3b-aecd-8696d32d39c9, event type delete
I0904 20:29:38.472139       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-qjg9m, uid f86e76bd-778c-4ade-9a6a-13096703e3a4, event type delete
I0904 20:29:38.480520       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid 419032a7-9985-420b-8559-7ad3696f9580, event type delete
I0904 20:29:38.480478       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (3µs)
E0904 20:29:38.482466       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: serviceaccounts "default" not found
I0904 20:29:38.482685       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0904 20:29:38.488070       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0904 20:29:38.534496       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid 77580d3d-3489-45f8-bc81-b7edd4f071d9, event type delete
I0904 20:29:38.537353       1 publisher.go:186] Finished syncing namespace "azuredisk-8081" (2.621837ms)
I0904 20:29:38.558164       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:38.564906       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.5µs)
... skipping 98 lines ...
I0904 20:29:41.091823       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-ynxxeg-mp-0000000, refreshing the cache
I0904 20:29:41.229589       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" to node "capz-ynxxeg-mp-0000000".
I0904 20:29:41.272293       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" lun 0 to node "capz-ynxxeg-mp-0000000".
I0904 20:29:41.272334       1 azure_controller_vmss.go:101] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000000) - attach disk(capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) with DiskEncryptionSetID()
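"GetDiskLun returned: cannot find Lun" above simply means the disk is not yet attached to the VM, so the controller picks a free LUN (here 0) and issues the VMSS update to attach it. A hedged, SDK-free sketch of choosing the lowest free LUN (names are ours):

package lunpick

import "fmt"

// lowestFreeLUN returns the smallest LUN not already used by attached data
// disks, the kind of choice behind "Trying to attach volume ... lun 0" above.
func lowestFreeLUN(used map[int32]bool, maxLUNs int32) (int32, error) {
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN among %d slots", maxLUNs)
}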
I0904 20:29:41.422833       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0904 20:29:41.481836       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-bldgv, uid 46a3e51b-58b5-4a17-b326-ca1d82448206, event type delete
E0904 20:29:41.493711       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-9v9ng" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0904 20:29:41.513112       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0904 20:29:41.513267       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid 286087e6-d9ba-459a-8bfd-c54baffcb938, event type delete
I0904 20:29:41.513243       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.8µs)
I0904 20:29:41.525771       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid 851c7710-b3ff-4063-9106-3f229eea84fd, event type delete
I0904 20:29:41.529281       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (3.464803ms)
I0904 20:29:41.561718       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 164 items received
... skipping 350 lines ...
I0904 20:31:45.307294       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: claim azuredisk-5466/pvc-2j5rr not found
I0904 20:31:45.307396       1 pv_controller.go:1108] reclaimVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: policy is Delete
I0904 20:31:45.307440       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]]
I0904 20:31:45.307500       1 pv_controller.go:1763] operation "delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]" is already running, skipping
I0904 20:31:45.308921       1 pv_controller.go:1340] isVolumeReleased[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is released
I0904 20:31:45.308939       1 pv_controller.go:1404] doDeleteVolume [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]
I0904 20:31:45.335737       1 pv_controller.go:1259] deletion of volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:31:45.335755       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: set phase Failed
I0904 20:31:45.335764       1 pv_controller.go:858] updating PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: set phase Failed
I0904 20:31:45.338608       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" with version 1494
I0904 20:31:45.338783       1 pv_controller.go:879] volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" entered phase "Failed"
I0904 20:31:45.338983       1 pv_controller.go:901] volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:31:45.338684       1 pv_protection_controller.go:205] Got event on PV pvc-b0e68e54-5338-40f1-a5a6-0166c490b602
I0904 20:31:45.338700       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" with version 1494
I0904 20:31:45.339102       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: phase: Failed, bound to: "azuredisk-5466/pvc-2j5rr (uid: b0e68e54-5338-40f1-a5a6-0166c490b602)", boundByController: true
I0904 20:31:45.339174       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is bound to claim azuredisk-5466/pvc-2j5rr
I0904 20:31:45.339245       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: claim azuredisk-5466/pvc-2j5rr not found
I0904 20:31:45.339257       1 pv_controller.go:1108] reclaimVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: policy is Delete
I0904 20:31:45.339282       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]]
I0904 20:31:45.339561       1 event.go:291] "Event occurred" object="pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted"
E0904 20:31:45.339618       1 goroutinemap.go:150] Operation for "delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]" failed. No retries permitted until 2022-09-04 20:31:45.839011544 +0000 UTC m=+438.286559512 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:31:45.339660       1 pv_controller.go:1765] operation "delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]" postponed due to exponential backoff
I0904 20:31:50.546488       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="57.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58038" resp=200
I0904 20:31:50.874867       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000000"
I0904 20:31:50.874933       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 to the node "capz-ynxxeg-mp-0000000" mounted false
I0904 20:31:50.891421       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000000"
I0904 20:31:50.891587       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602 to the node "capz-ynxxeg-mp-0000000" mounted false
... skipping 6 lines ...
I0904 20:31:51.543152       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 13 items received
I0904 20:31:51.642798       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:31:52.525456       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 14 items received
I0904 20:31:53.572302       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:31:53.594489       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:31:53.594548       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" with version 1494
I0904 20:31:53.594589       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: phase: Failed, bound to: "azuredisk-5466/pvc-2j5rr (uid: b0e68e54-5338-40f1-a5a6-0166c490b602)", boundByController: true
I0904 20:31:53.594623       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is bound to claim azuredisk-5466/pvc-2j5rr
I0904 20:31:53.594645       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: claim azuredisk-5466/pvc-2j5rr not found
I0904 20:31:53.594659       1 pv_controller.go:1108] reclaimVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: policy is Delete
I0904 20:31:53.594675       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]]
I0904 20:31:53.594707       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602] started
I0904 20:31:53.599374       1 pv_controller.go:1340] isVolumeReleased[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is released
I0904 20:31:53.599390       1 pv_controller.go:1404] doDeleteVolume [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]
I0904 20:31:53.599529       1 pv_controller.go:1259] deletion of volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) since it's in attaching or detaching state
I0904 20:31:53.599545       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: set phase Failed
I0904 20:31:53.599555       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: phase Failed already set
E0904 20:31:53.599658       1 goroutinemap.go:150] Operation for "delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]" failed. No retries permitted until 2022-09-04 20:31:54.59962846 +0000 UTC m=+447.047176528 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602) since it's in attaching or detaching state
I0904 20:31:53.708186       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:31:53.708427       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:31:54.544242       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 12 items received
I0904 20:31:58.641954       1 gc_controller.go:161] GC'ing orphaned
I0904 20:31:58.642389       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:32:00.548608       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="91.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58712" resp=200
... skipping 5 lines ...
I0904 20:32:06.592895       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 60 items received
I0904 20:32:08.540090       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 12 items received
I0904 20:32:08.571907       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:08.572992       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:08.595126       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:08.595201       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" with version 1494
I0904 20:32:08.595238       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: phase: Failed, bound to: "azuredisk-5466/pvc-2j5rr (uid: b0e68e54-5338-40f1-a5a6-0166c490b602)", boundByController: true
I0904 20:32:08.595334       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is bound to claim azuredisk-5466/pvc-2j5rr
I0904 20:32:08.595381       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: claim azuredisk-5466/pvc-2j5rr not found
I0904 20:32:08.595414       1 pv_controller.go:1108] reclaimVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: policy is Delete
I0904 20:32:08.595432       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]]
I0904 20:32:08.595594       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602] started
I0904 20:32:08.598030       1 pv_controller.go:1340] isVolumeReleased[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is released
... skipping 4 lines ...
I0904 20:32:13.877505       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602
I0904 20:32:13.877597       1 pv_controller.go:1435] volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" deleted
I0904 20:32:13.877665       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: success
I0904 20:32:13.884773       1 pv_protection_controller.go:205] Got event on PV pvc-b0e68e54-5338-40f1-a5a6-0166c490b602
I0904 20:32:13.884903       1 pv_protection_controller.go:125] Processing PV pvc-b0e68e54-5338-40f1-a5a6-0166c490b602
I0904 20:32:13.885286       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" with version 1539
I0904 20:32:13.885502       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: phase: Failed, bound to: "azuredisk-5466/pvc-2j5rr (uid: b0e68e54-5338-40f1-a5a6-0166c490b602)", boundByController: true
I0904 20:32:13.885665       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: volume is bound to claim azuredisk-5466/pvc-2j5rr
I0904 20:32:13.885803       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: claim azuredisk-5466/pvc-2j5rr not found
I0904 20:32:13.885900       1 pv_controller.go:1108] reclaimVolume[pvc-b0e68e54-5338-40f1-a5a6-0166c490b602]: policy is Delete
I0904 20:32:13.886035       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b0e68e54-5338-40f1-a5a6-0166c490b602[ad9f7a6b-bfdc-4398-86f4-dff9b09b488f]]
I0904 20:32:13.886160       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b0e68e54-5338-40f1-a5a6-0166c490b602] started
I0904 20:32:13.892229       1 pv_controller.go:1243] Volume "pvc-b0e68e54-5338-40f1-a5a6-0166c490b602" is already being deleted
... skipping 107 lines ...
I0904 20:32:23.214098       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-v9t5d, PodDisruptionBudget controller will avoid syncing.
I0904 20:32:23.214232       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-v9t5d"
I0904 20:32:23.235688       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" lun 0 to node "capz-ynxxeg-mp-0000001".
I0904 20:32:23.235745       1 azure_controller_vmss.go:101] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - attach disk(capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) with DiskEncryptionSetID()
I0904 20:32:23.480845       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0904 20:32:23.508564       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-rghlw, uid 65e9f950-5f54-4484-a32a-596437de9cdf, event type delete
E0904 20:32:23.529915       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-7jkm4" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0904 20:32:23.534859       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c24ec2787811, uid 45574225-bc30-4541-9f69-f5751066f66b, event type delete
I0904 20:32:23.537447       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c25137babc17, uid 1b2e8d22-b14a-40a1-a61b-0f8e9262c1cb, event type delete
I0904 20:32:23.541135       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c251c83622d7, uid dfdafa12-66af-4056-9dcb-2231b4b7b544, event type delete
I0904 20:32:23.547158       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c2525b341f03, uid e76b5443-bec6-4e56-8e1e-d474f9c9471d, event type delete
I0904 20:32:23.550617       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c252faae002a, uid 648e95a5-ce35-415b-b161-ec54fe41374d, event type delete
I0904 20:32:23.553632       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-mhr5r.1711c253d23b6b78, uid 155b337e-fa01-4523-aa69-dbac722e424c, event type delete
... skipping 172 lines ...
I0904 20:32:41.988773       1 pv_controller.go:1108] reclaimVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: policy is Delete
I0904 20:32:41.988781       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]]
I0904 20:32:41.988788       1 pv_controller.go:1763] operation "delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]" is already running, skipping
I0904 20:32:41.988812       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7] started
I0904 20:32:41.990219       1 pv_controller.go:1340] isVolumeReleased[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is released
I0904 20:32:41.990236       1 pv_controller.go:1404] doDeleteVolume [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]
I0904 20:32:41.990268       1 pv_controller.go:1259] deletion of volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state
I0904 20:32:41.990349       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: set phase Failed
I0904 20:32:41.990425       1 pv_controller.go:858] updating PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: set phase Failed
I0904 20:32:41.993295       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" with version 1645
I0904 20:32:41.993319       1 pv_controller.go:879] volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" entered phase "Failed"
I0904 20:32:41.993437       1 pv_controller.go:901] volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state
E0904 20:32:41.993573       1 goroutinemap.go:150] Operation for "delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]" failed. No retries permitted until 2022-09-04 20:32:42.493538655 +0000 UTC m=+494.941086723 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state
I0904 20:32:41.993809       1 event.go:291] "Event occurred" object="pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state"
I0904 20:32:41.993921       1 pv_protection_controller.go:205] Got event on PV pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7
I0904 20:32:41.993942       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" with version 1645
I0904 20:32:41.993995       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: phase: Failed, bound to: "azuredisk-2790/pvc-gnmxx (uid: 1ee6249a-62db-4f79-ba1d-9b19b706b1f7)", boundByController: true
I0904 20:32:41.994055       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is bound to claim azuredisk-2790/pvc-gnmxx
I0904 20:32:41.994145       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: claim azuredisk-2790/pvc-gnmxx not found
I0904 20:32:41.994180       1 pv_controller.go:1108] reclaimVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: policy is Delete
I0904 20:32:41.994232       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]]
I0904 20:32:41.994249       1 pv_controller.go:1765] operation "delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]" postponed due to exponential backoff
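The goroutinemap error above also shows the retry policy: each failed delete sets a durationBeforeRetry that doubles on every failure (500ms here, then 1s and 2s on later attempts), and syncs that arrive before that deadline are postponed. Below is a rough sketch of that exponential backoff, using assumed names and only the standard library.

package main

import (
	"errors"
	"fmt"
	"time"
)

// backoff tracks the earliest time a failed operation may be retried.
type backoff struct {
	initial time.Duration
	max     time.Duration
	current time.Duration
	nextTry time.Time
}

// fail records a failure and doubles the wait before the next attempt, capped at max.
func (b *backoff) fail(now time.Time) time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
	}
	if b.current > b.max {
		b.current = b.max
	}
	b.nextTry = now.Add(b.current)
	return b.current
}

// ready reports whether a new attempt is allowed at time now.
func (b *backoff) ready(now time.Time) bool { return !now.Before(b.nextTry) }

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 5 * time.Minute}
	deleteDisk := func() error { return errors.New("disk is in attaching or detaching state") }

	now := time.Now()
	for attempt := 0; attempt < 3; attempt++ {
		if !b.ready(now) {
			fmt.Println("postponed due to exponential backoff")
			continue
		}
		if err := deleteDisk(); err != nil {
			wait := b.fail(now)
			fmt.Printf("failed; no retries permitted for %v: %v\n", wait, err)
		}
		now = now.Add(b.current) // pretend time advances to the next resync
	}
}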
I0904 20:32:43.716743       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:32:50.544183       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0904 20:32:50.555745       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="80.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41506" resp=200
I0904 20:32:51.725864       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:32:51.725895       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7 to the node "capz-ynxxeg-mp-0000001" mounted false
I0904 20:32:53.574485       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:53.597641       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:53.597878       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" with version 1645
I0904 20:32:53.597995       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: phase: Failed, bound to: "azuredisk-2790/pvc-gnmxx (uid: 1ee6249a-62db-4f79-ba1d-9b19b706b1f7)", boundByController: true
I0904 20:32:53.598084       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is bound to claim azuredisk-2790/pvc-gnmxx
I0904 20:32:53.598109       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: claim azuredisk-2790/pvc-gnmxx not found
I0904 20:32:53.598118       1 pv_controller.go:1108] reclaimVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: policy is Delete
I0904 20:32:53.598135       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]]
I0904 20:32:53.598192       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7] started
I0904 20:32:53.603938       1 pv_controller.go:1340] isVolumeReleased[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is released
I0904 20:32:53.603960       1 pv_controller.go:1404] doDeleteVolume [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]
I0904 20:32:53.604014       1 pv_controller.go:1259] deletion of volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state
I0904 20:32:53.604040       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: set phase Failed
I0904 20:32:53.604050       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: phase Failed already set
E0904 20:32:53.604120       1 goroutinemap.go:150] Operation for "delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]" failed. No retries permitted until 2022-09-04 20:32:54.604088134 +0000 UTC m=+507.051636102 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) since it's in attaching or detaching state
I0904 20:32:53.718650       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:32:55.820716       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:32:56.068802       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0904 20:32:56.543285       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0904 20:32:56.836673       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:32:57.087007       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7) returned with <nil>
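The repeated failures above are an ordering constraint, not corruption: a managed disk cannot be deleted while it is attached or still attaching/detaching, so the PV controller keeps failing until the detach call above returns, after which the next retry deletes the disk. Below is a simplified sketch of that gate, with hypothetical types rather than the Azure cloud-provider code.

package main

import "fmt"

type diskState string

const (
	stateAttached   diskState = "Attached"
	stateDetaching  diskState = "Detaching"
	stateUnattached diskState = "Unattached"
)

type managedDisk struct {
	name  string
	state diskState
}

// deleteDisk refuses to remove a disk that is still attached or mid-transition,
// mirroring the delete errors in the log above.
func deleteDisk(d *managedDisk) error {
	switch d.state {
	case stateAttached:
		return fmt.Errorf("disk(%s) already attached to node, could not be deleted", d.name)
	case stateDetaching:
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", d.name)
	default:
		fmt.Printf("azureDisk - deleted a managed disk: %s\n", d.name)
		return nil
	}
}

func main() {
	d := &managedDisk{name: "capz-ynxxeg-dynamic-pvc-1ee6249a", state: stateDetaching}
	if err := deleteDisk(d); err != nil {
		fmt.Println("retry on next sync:", err) // what the PV controller does above
	}
	d.state = stateUnattached // the detach call has returned
	_ = deleteDisk(d)         // this retry succeeds
}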
... skipping 15 lines ...
I0904 20:33:07.523521       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0904 20:33:08.256078       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0904 20:33:08.572505       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:33:08.574577       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:33:08.598754       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:33:08.598811       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" with version 1645
I0904 20:33:08.598892       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: phase: Failed, bound to: "azuredisk-2790/pvc-gnmxx (uid: 1ee6249a-62db-4f79-ba1d-9b19b706b1f7)", boundByController: true
I0904 20:33:08.598933       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is bound to claim azuredisk-2790/pvc-gnmxx
I0904 20:33:08.598983       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: claim azuredisk-2790/pvc-gnmxx not found
I0904 20:33:08.598996       1 pv_controller.go:1108] reclaimVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: policy is Delete
I0904 20:33:08.599013       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]]
I0904 20:33:08.599080       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7] started
I0904 20:33:08.602244       1 pv_controller.go:1340] isVolumeReleased[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is released
... skipping 4 lines ...
I0904 20:33:13.920630       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7
I0904 20:33:13.920660       1 pv_controller.go:1435] volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" deleted
I0904 20:33:13.920671       1 pv_controller.go:1283] deleteVolumeOperation [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: success
I0904 20:33:13.929792       1 pv_protection_controller.go:205] Got event on PV pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7
I0904 20:33:13.929821       1 pv_protection_controller.go:125] Processing PV pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7
I0904 20:33:13.930091       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" with version 1693
I0904 20:33:13.930129       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: phase: Failed, bound to: "azuredisk-2790/pvc-gnmxx (uid: 1ee6249a-62db-4f79-ba1d-9b19b706b1f7)", boundByController: true
I0904 20:33:13.930153       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: volume is bound to claim azuredisk-2790/pvc-gnmxx
I0904 20:33:13.930176       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: claim azuredisk-2790/pvc-gnmxx not found
I0904 20:33:13.930186       1 pv_controller.go:1108] reclaimVolume[pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7]: policy is Delete
I0904 20:33:13.930199       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7[4278d01d-38c0-456c-ba70-dae90accde9c]]
I0904 20:33:13.930221       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7] started
I0904 20:33:13.934440       1 pv_controller.go:1243] Volume "pvc-1ee6249a-62db-4f79-ba1d-9b19b706b1f7" is already being deleted
... skipping 125 lines ...
I0904 20:33:23.189940       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" lun 0 to node "capz-ynxxeg-mp-0000001".
I0904 20:33:23.189995       1 azure_controller_vmss.go:101] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - attach disk(capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) with DiskEncryptionSetID()
I0904 20:33:23.356780       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0904 20:33:23.374327       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2790, name kube-root-ca.crt, uid 7c79642a-5b4a-444f-b32a-4b8f42575045, event type delete
I0904 20:33:23.376258       1 publisher.go:186] Finished syncing namespace "azuredisk-2790" (1.784225ms)
I0904 20:33:23.394262       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-6f9mq, uid 6408f7ce-1095-4682-a465-8dba60059e4d, event type delete
E0904 20:33:23.407381       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-cmfq6" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0904 20:33:23.412269       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0904 20:33:23.412310       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (3µs)
I0904 20:33:23.412434       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid 33148aaa-0a5d-4f9a-8652-d080c9af8074, event type delete
I0904 20:33:23.457070       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-v9t5d.1711c274862e4443, uid a2a316a1-de64-4c76-98d2-57a59eda7c0a, event type delete
I0904 20:33:23.460065       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-v9t5d.1711c276ec43cc68, uid 60bd2f37-da87-45f3-bd18-9b2696b41fa9, event type delete
I0904 20:33:23.463015       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-v9t5d.1711c27761def265, uid 1d5a0c01-8eb3-4624-971b-15261fe43092, event type delete
... skipping 163 lines ...
I0904 20:33:41.906994       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: claim azuredisk-5356/pvc-lnx62 not found
I0904 20:33:41.907002       1 pv_controller.go:1108] reclaimVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: policy is Delete
I0904 20:33:41.907015       1 pv_controller.go:1752] scheduleOperation[delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]]
I0904 20:33:41.907026       1 pv_controller.go:1763] operation "delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]" is already running, skipping
I0904 20:33:41.910408       1 pv_controller.go:1340] isVolumeReleased[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is released
I0904 20:33:41.910425       1 pv_controller.go:1404] doDeleteVolume [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]
I0904 20:33:41.910473       1 pv_controller.go:1259] deletion of volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state
I0904 20:33:41.910489       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: set phase Failed
I0904 20:33:41.910498       1 pv_controller.go:858] updating PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: set phase Failed
I0904 20:33:41.912672       1 pv_protection_controller.go:205] Got event on PV pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa
I0904 20:33:41.913097       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" with version 1793
I0904 20:33:41.913300       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: phase: Failed, bound to: "azuredisk-5356/pvc-lnx62 (uid: 054c2451-1dc3-47e7-a73b-736e2a7b49aa)", boundByController: true
I0904 20:33:41.913488       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is bound to claim azuredisk-5356/pvc-lnx62
I0904 20:33:41.913633       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: claim azuredisk-5356/pvc-lnx62 not found
I0904 20:33:41.913800       1 pv_controller.go:1108] reclaimVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: policy is Delete
I0904 20:33:41.913959       1 pv_controller.go:1752] scheduleOperation[delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]]
I0904 20:33:41.914148       1 pv_controller.go:1763] operation "delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]" is already running, skipping
I0904 20:33:41.913800       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" with version 1793
I0904 20:33:41.914338       1 pv_controller.go:879] volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" entered phase "Failed"
I0904 20:33:41.914350       1 pv_controller.go:901] volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state
E0904 20:33:41.914404       1 goroutinemap.go:150] Operation for "delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]" failed. No retries permitted until 2022-09-04 20:33:42.41438326 +0000 UTC m=+554.861931328 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state
I0904 20:33:41.914620       1 event.go:291] "Event occurred" object="pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state"
I0904 20:33:43.725712       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:33:46.557263       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 20 items received
I0904 20:33:46.571509       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 22 items received
I0904 20:33:49.570361       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0904 20:33:50.556070       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42654" resp=200
I0904 20:33:51.543071       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0904 20:33:51.608254       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0904 20:33:53.576083       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:33:53.601468       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:33:53.601909       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" with version 1793
I0904 20:33:53.601963       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: phase: Failed, bound to: "azuredisk-5356/pvc-lnx62 (uid: 054c2451-1dc3-47e7-a73b-736e2a7b49aa)", boundByController: true
I0904 20:33:53.601995       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is bound to claim azuredisk-5356/pvc-lnx62
I0904 20:33:53.602049       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: claim azuredisk-5356/pvc-lnx62 not found
I0904 20:33:53.602059       1 pv_controller.go:1108] reclaimVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: policy is Delete
I0904 20:33:53.602100       1 pv_controller.go:1752] scheduleOperation[delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]]
I0904 20:33:53.602134       1 pv_controller.go:1231] deleteVolumeOperation [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa] started
I0904 20:33:53.606511       1 pv_controller.go:1340] isVolumeReleased[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is released
I0904 20:33:53.606531       1 pv_controller.go:1404] doDeleteVolume [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]
I0904 20:33:53.606606       1 pv_controller.go:1259] deletion of volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state
I0904 20:33:53.606621       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: set phase Failed
I0904 20:33:53.606674       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: phase Failed already set
E0904 20:33:53.606712       1 goroutinemap.go:150] Operation for "delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]" failed. No retries permitted until 2022-09-04 20:33:54.606687245 +0000 UTC m=+567.054235313 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) since it's in attaching or detaching state
I0904 20:33:57.132872       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) returned with <nil>
I0904 20:33:57.132923       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa) succeeded
I0904 20:33:57.133203       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa was detached from node:capz-ynxxeg-mp-0000001
I0904 20:33:57.133235       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa") on node "capz-ynxxeg-mp-0000001" 
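The detach itself is also ordered: the attach/detach controller only issues the detach once the node has reported the volume as no longer mounted (the earlier SetVolumeMountedByNode ... mounted false line), and only a detached disk can then be deleted. Below is a small sketch of that ordering with assumed names, not the real attach_detach_controller.

package main

import "fmt"

// volumeAttachment is a hypothetical stand-in for the controller's view of one
// disk attached to one node.
type volumeAttachment struct {
	disk     string
	node     string
	mounted  bool // whether the node still reports the volume mounted
	attached bool
}

// tryDetach refuses to detach while the node still reports the volume mounted.
func tryDetach(va *volumeAttachment) bool {
	if va.mounted {
		fmt.Printf("volume %s still mounted on node %s; waiting\n", va.disk, va.node)
		return false
	}
	va.attached = false
	fmt.Printf("DetachVolume.Detach succeeded for volume %q on node %q\n", va.disk, va.node)
	return true
}

func main() {
	va := &volumeAttachment{disk: "pvc-054c2451", node: "capz-ynxxeg-mp-0000001", mounted: true, attached: true}
	tryDetach(va)      // the pod's volume is still in use on the node
	va.mounted = false // kubelet reports the volume unmounted
	tryDetach(va)      // now the detach, and later the disk delete, can proceed
}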
I0904 20:33:58.646668       1 gc_controller.go:161] GC'ing orphaned
I0904 20:33:58.646700       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:34:00.548252       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="116.402µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54578" resp=200
I0904 20:34:03.974865       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 19 items received
I0904 20:34:08.573670       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:34:08.576890       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:34:08.601977       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:34:08.602064       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" with version 1793
I0904 20:34:08.602139       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: phase: Failed, bound to: "azuredisk-5356/pvc-lnx62 (uid: 054c2451-1dc3-47e7-a73b-736e2a7b49aa)", boundByController: true
I0904 20:34:08.602203       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is bound to claim azuredisk-5356/pvc-lnx62
I0904 20:34:08.602240       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: claim azuredisk-5356/pvc-lnx62 not found
I0904 20:34:08.602249       1 pv_controller.go:1108] reclaimVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: policy is Delete
I0904 20:34:08.602293       1 pv_controller.go:1752] scheduleOperation[delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]]
I0904 20:34:08.602346       1 pv_controller.go:1231] deleteVolumeOperation [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa] started
I0904 20:34:08.604591       1 pv_controller.go:1340] isVolumeReleased[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is released
... skipping 5 lines ...
I0904 20:34:13.879118       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa
I0904 20:34:13.879148       1 pv_controller.go:1435] volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" deleted
I0904 20:34:13.879161       1 pv_controller.go:1283] deleteVolumeOperation [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: success
I0904 20:34:13.883698       1 pv_protection_controller.go:205] Got event on PV pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa
I0904 20:34:13.883977       1 pv_protection_controller.go:125] Processing PV pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa
I0904 20:34:13.884269       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" with version 1840
I0904 20:34:13.884324       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: phase: Failed, bound to: "azuredisk-5356/pvc-lnx62 (uid: 054c2451-1dc3-47e7-a73b-736e2a7b49aa)", boundByController: true
I0904 20:34:13.884354       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: volume is bound to claim azuredisk-5356/pvc-lnx62
I0904 20:34:13.884373       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: claim azuredisk-5356/pvc-lnx62 not found
I0904 20:34:13.884384       1 pv_controller.go:1108] reclaimVolume[pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa]: policy is Delete
I0904 20:34:13.884397       1 pv_controller.go:1752] scheduleOperation[delete-pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa[e317bd1b-fa70-4013-9bba-5d766ba46ae1]]
I0904 20:34:13.884419       1 pv_controller.go:1231] deleteVolumeOperation [pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa] started
I0904 20:34:13.888879       1 pv_controller.go:1243] Volume "pvc-054c2451-1dc3-47e7-a73b-736e2a7b49aa" is already being deleted
... skipping 115 lines ...
I0904 20:34:23.312668       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-mt962.1711c28589a5f89c, uid d6e331b1-7fc8-420e-b009-c998855619a5, event type delete
I0904 20:34:23.317365       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-mt962.1711c2858c18a9b7, uid db86308a-e665-4855-b6b7-837248aeea05, event type delete
I0904 20:34:23.322665       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-mt962.1711c28591f6f95a, uid 3b851ac8-6c74-48ab-80b3-d381f7b7c435, event type delete
I0904 20:34:23.326469       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-lnx62.1711c281b89a30e0, uid 25b9e9e6-a6f4-4d9a-a4b4-bbfc39929d68, event type delete
I0904 20:34:23.329225       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-lnx62.1711c2825031a62d, uid 31edd577-a265-424a-b7fb-437230fe093f, event type delete
I0904 20:34:23.348304       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-7f6m5, uid 8a697658-f200-4017-a31b-01c307c8aeb1, event type delete
E0904 20:34:23.360124       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-4nszc" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0904 20:34:23.409190       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0904 20:34:23.409227       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (2.5µs)
I0904 20:34:23.409282       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid bb51f393-f10c-48dd-b686-fe4970851fb1, event type delete
I0904 20:34:23.429999       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid 5ed55f70-f63c-4d7c-93c5-7f55602d1bf2, event type delete
I0904 20:34:23.434953       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (4.837866ms)
I0904 20:34:23.455928       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5356, estimate: 0, errors: <nil>
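The namespace teardown above follows the usual pattern: once azuredisk-5356 is terminating, the namespace controller deletes every remaining object per resource type, while attempts to create new content there (such as the replacement token secret) are rejected, which is why the tokens_controller error is expected and harmless. Below is a rough sketch with made-up types, not the real namespaced_resources_deleter.

package main

import "fmt"

// object and namespace are made-up stand-ins for API objects in a namespace.
type object struct{ resource, name string }

type namespace struct {
	name        string
	terminating bool
	objects     []object
}

// create mirrors the "forbidden: unable to create new content ... because it
// is being terminated" errors from tokens_controller above.
func (ns *namespace) create(o object) error {
	if ns.terminating {
		return fmt.Errorf("%s %q is forbidden: unable to create new content in namespace %s because it is being terminated", o.resource, o.name, ns.name)
	}
	ns.objects = append(ns.objects, o)
	return nil
}

// deleteAllContent removes every remaining object and returns how many are left.
func (ns *namespace) deleteAllContent() int {
	for _, o := range ns.objects {
		fmt.Printf("process object: %s, name %s, event type delete\n", o.resource, o.name)
	}
	ns.objects = nil
	return len(ns.objects)
}

func main() {
	ns := &namespace{
		name:        "azuredisk-5356",
		terminating: true,
		objects:     []object{{"secrets", "default-token-7f6m5"}, {"serviceaccounts", "default"}},
	}
	if err := ns.create(object{"secrets", "default-token-4nszc"}); err != nil {
		fmt.Println("E:", err) // the error above is expected during teardown
	}
	fmt.Printf("deleteAllContent - namespace: %s, estimate: %d\n", ns.name, ns.deleteAllContent())
}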
... skipping 694 lines ...
I0904 20:35:51.686604       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:35:51.686612       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:35:51.686619       1 pv_controller.go:1763] operation "delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]" is already running, skipping
I0904 20:35:51.686642       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3] started
I0904 20:35:51.689359       1 pv_controller.go:1340] isVolumeReleased[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is released
I0904 20:35:51.689377       1 pv_controller.go:1404] doDeleteVolume [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]
I0904 20:35:51.728394       1 pv_controller.go:1259] deletion of volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:35:51.728414       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: set phase Failed
I0904 20:35:51.728423       1 pv_controller.go:858] updating PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: set phase Failed
I0904 20:35:51.732815       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2083
I0904 20:35:51.732914       1 pv_controller.go:879] volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" entered phase "Failed"
I0904 20:35:51.732927       1 pv_controller.go:901] volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:35:51.732986       1 goroutinemap.go:150] Operation for "delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]" failed. No retries permitted until 2022-09-04 20:35:52.232947244 +0000 UTC m=+684.680495212 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:35:51.732845       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2083
I0904 20:35:51.733260       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase: Failed, bound to: "azuredisk-5194/pvc-zfps8 (uid: f946e1c0-4dcd-4951-809f-6ece435067d3)", boundByController: true
I0904 20:35:51.733426       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is bound to claim azuredisk-5194/pvc-zfps8
I0904 20:35:51.733575       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: claim azuredisk-5194/pvc-zfps8 not found
I0904 20:35:51.733739       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:35:51.733914       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:35:51.732830       1 pv_protection_controller.go:205] Got event on PV pvc-f946e1c0-4dcd-4951-809f-6ece435067d3
I0904 20:35:51.733341       1 event.go:291] "Event occurred" object="pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted"
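When the cloud delete fails like this, the PV controller marks the volume phase Failed and records a VolumeFailedDelete warning event, but the reclaim is retried on later syncs and the PV is still deleted once a retry succeeds, as the later log lines show. Below is a simplified sketch of that failure handling, with assumed types rather than the real pv_controller.

package main

import (
	"errors"
	"fmt"
)

// persistentVolume and event are assumed, simplified types.
type persistentVolume struct {
	name  string
	phase string
}

type event struct{ object, reason, message string }

var events []event

// recordWarning keeps a warning event, like the VolumeFailedDelete events above.
func recordWarning(pv *persistentVolume, reason, msg string) {
	events = append(events, event{pv.name, reason, msg})
	fmt.Printf("Event: object=%q type=Warning reason=%q message=%q\n", pv.name, reason, msg)
}

// reclaimDelete tries the cloud deletion and marks the PV Failed on error; the
// operation is simply retried on a later sync.
func reclaimDelete(pv *persistentVolume, deleteDisk func() error) {
	if err := deleteDisk(); err != nil {
		pv.phase = "Failed"
		recordWarning(pv, "VolumeFailedDelete", err.Error())
		return
	}
	fmt.Printf("volume %q deleted\n", pv.name)
}

func main() {
	pv := &persistentVolume{name: "pvc-e80ecff9", phase: "Released"}
	attached := true
	deleteDisk := func() error {
		if attached {
			return errors.New("disk already attached to node, could not be deleted")
		}
		return nil
	}
	reclaimDelete(pv, deleteDisk) // first attempt fails, PV enters phase Failed
	attached = false              // detach eventually completes
	reclaimDelete(pv, deleteDisk) // a later retry succeeds and the PV is removed
}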
... skipping 24 lines ...
I0904 20:35:53.606351       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:35:53.606386       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj found: phase: Bound, bound to: "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c", bindCompleted: true, boundByController: true
I0904 20:35:53.606400       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: all is bound
I0904 20:35:53.606407       1 pv_controller.go:858] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Bound
I0904 20:35:53.606415       1 pv_controller.go:861] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase Bound already set
I0904 20:35:53.606427       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2083
I0904 20:35:53.606446       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase: Failed, bound to: "azuredisk-5194/pvc-zfps8 (uid: f946e1c0-4dcd-4951-809f-6ece435067d3)", boundByController: true
I0904 20:35:53.606485       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is bound to claim azuredisk-5194/pvc-zfps8
I0904 20:35:53.606503       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: claim azuredisk-5194/pvc-zfps8 not found
I0904 20:35:53.606510       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:35:53.606523       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:35:53.606568       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3] started
I0904 20:35:53.606839       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-l8mpx" with version 1870
... skipping 27 lines ...
I0904 20:35:53.607447       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-b8svj] status: phase Bound already set
I0904 20:35:53.607525       1 pv_controller.go:1038] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" bound to claim "azuredisk-5194/pvc-b8svj"
I0904 20:35:53.607605       1 pv_controller.go:1039] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:35:53.607683       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-b8svj" status after binding: phase: Bound, bound to: "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c", bindCompleted: true, boundByController: true
I0904 20:35:53.612824       1 pv_controller.go:1340] isVolumeReleased[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is released
I0904 20:35:53.612842       1 pv_controller.go:1404] doDeleteVolume [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]
I0904 20:35:53.612950       1 pv_controller.go:1259] deletion of volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) since it's in attaching or detaching state
I0904 20:35:53.612966       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: set phase Failed
I0904 20:35:53.612975       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase Failed already set
E0904 20:35:53.613075       1 goroutinemap.go:150] Operation for "delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]" failed. No retries permitted until 2022-09-04 20:35:54.612983547 +0000 UTC m=+687.060531615 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) since it's in attaching or detaching state
I0904 20:35:53.743592       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-control-plane-lcqxk ReadyCondition updated. Updating timestamp.
I0904 20:35:53.743816       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:35:58.651281       1 gc_controller.go:161] GC'ing orphaned
I0904 20:35:58.651352       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:36:00.546522       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36786" resp=200
I0904 20:36:02.560134       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
... skipping 43 lines ...
I0904 20:36:08.608380       1 pv_controller.go:1039] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:36:08.608164       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj found: phase: Bound, bound to: "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c", bindCompleted: true, boundByController: true
I0904 20:36:08.608428       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: all is bound
I0904 20:36:08.608436       1 pv_controller.go:858] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Bound
I0904 20:36:08.608446       1 pv_controller.go:861] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase Bound already set
I0904 20:36:08.608462       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2083
I0904 20:36:08.608509       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase: Failed, bound to: "azuredisk-5194/pvc-zfps8 (uid: f946e1c0-4dcd-4951-809f-6ece435067d3)", boundByController: true
I0904 20:36:08.608531       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is bound to claim azuredisk-5194/pvc-zfps8
I0904 20:36:08.608550       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: claim azuredisk-5194/pvc-zfps8 not found
I0904 20:36:08.608589       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-b8svj" status after binding: phase: Bound, bound to: "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c", bindCompleted: true, boundByController: true
I0904 20:36:08.608559       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:36:08.608622       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:36:08.608687       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3] started
I0904 20:36:08.616602       1 pv_controller.go:1340] isVolumeReleased[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is released
I0904 20:36:08.616622       1 pv_controller.go:1404] doDeleteVolume [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]
I0904 20:36:08.616675       1 pv_controller.go:1259] deletion of volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) since it's in attaching or detaching state
I0904 20:36:08.616693       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: set phase Failed
I0904 20:36:08.616700       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase Failed already set
E0904 20:36:08.616807       1 goroutinemap.go:150] Operation for "delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]" failed. No retries permitted until 2022-09-04 20:36:10.616752489 +0000 UTC m=+703.064300457 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) since it's in attaching or detaching state
I0904 20:36:09.162415       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:36:10.548693       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="78.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36480" resp=200
I0904 20:36:14.242870       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:36:17.335766       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) returned with <nil>
I0904 20:36:17.335816       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3) succeeded
I0904 20:36:17.335827       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3 was detached from node:capz-ynxxeg-mp-0000001
... skipping 52 lines ...
I0904 20:36:23.610215       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:36:23.610232       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj found: phase: Bound, bound to: "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c", bindCompleted: true, boundByController: true
I0904 20:36:23.610278       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: all is bound
I0904 20:36:23.610289       1 pv_controller.go:858] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Bound
I0904 20:36:23.610299       1 pv_controller.go:861] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase Bound already set
I0904 20:36:23.610312       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2083
I0904 20:36:23.610389       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase: Failed, bound to: "azuredisk-5194/pvc-zfps8 (uid: f946e1c0-4dcd-4951-809f-6ece435067d3)", boundByController: true
I0904 20:36:23.610446       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is bound to claim azuredisk-5194/pvc-zfps8
I0904 20:36:23.610469       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: claim azuredisk-5194/pvc-zfps8 not found
I0904 20:36:23.610482       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:36:23.610531       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:36:23.610577       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3] started
I0904 20:36:23.616699       1 pv_controller.go:1340] isVolumeReleased[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is released
... skipping 3 lines ...
I0904 20:36:28.825643       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3
I0904 20:36:28.825788       1 pv_controller.go:1435] volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" deleted
I0904 20:36:28.825866       1 pv_controller.go:1283] deleteVolumeOperation [pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: success
I0904 20:36:28.830718       1 pv_protection_controller.go:205] Got event on PV pvc-f946e1c0-4dcd-4951-809f-6ece435067d3
I0904 20:36:28.830845       1 pv_protection_controller.go:125] Processing PV pvc-f946e1c0-4dcd-4951-809f-6ece435067d3
I0904 20:36:28.831806       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f946e1c0-4dcd-4951-809f-6ece435067d3" with version 2140
I0904 20:36:28.831980       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: phase: Failed, bound to: "azuredisk-5194/pvc-zfps8 (uid: f946e1c0-4dcd-4951-809f-6ece435067d3)", boundByController: true
I0904 20:36:28.832371       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: volume is bound to claim azuredisk-5194/pvc-zfps8
I0904 20:36:28.832506       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: claim azuredisk-5194/pvc-zfps8 not found
I0904 20:36:28.832599       1 pv_controller.go:1108] reclaimVolume[pvc-f946e1c0-4dcd-4951-809f-6ece435067d3]: policy is Delete
I0904 20:36:28.832702       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]]
I0904 20:36:28.832885       1 pv_controller.go:1763] operation "delete-pvc-f946e1c0-4dcd-4951-809f-6ece435067d3[54777186-d4df-42b7-a398-ea057bf1983b]" is already running, skipping
I0904 20:36:28.853465       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-f946e1c0-4dcd-4951-809f-6ece435067d3
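Removing the protection finalizer is the last step: deleting the PV object only sets a deletion timestamp, and the object is actually removed from the API once the pv-protection finalizer is dropped. Below is a minimal sketch of that finalizer handling, not the real pv_protection_controller.

package main

import "fmt"

const pvProtectionFinalizer = "kubernetes.io/pv-protection"

// pv is a hypothetical, pared-down PersistentVolume object.
type pv struct {
	name       string
	finalizers []string
	deleting   bool // a deletion timestamp has been set
	inUse      bool // still bound or mounted somewhere
}

// removeProtectionFinalizer drops the finalizer once the PV is safe to remove.
func removeProtectionFinalizer(p *pv) {
	if p.inUse {
		return // keep protecting a PV that is still in use
	}
	kept := p.finalizers[:0]
	for _, f := range p.finalizers {
		if f != pvProtectionFinalizer {
			kept = append(kept, f)
		}
	}
	p.finalizers = kept
	fmt.Printf("Removed protection finalizer from PV %s\n", p.name)
}

func main() {
	p := &pv{name: "pvc-f946e1c0", finalizers: []string{pvProtectionFinalizer}, deleting: true}
	removeProtectionFinalizer(p)
	if p.deleting && len(p.finalizers) == 0 {
		fmt.Printf("PV %s is now actually removed from the API\n", p.name)
	}
}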
... skipping 191 lines ...
I0904 20:37:03.074143       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:03.074149       1 pv_controller.go:1763] operation "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" is already running, skipping
I0904 20:37:03.074197       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c] started
I0904 20:37:03.073859       1 pv_protection_controller.go:205] Got event on PV pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c
I0904 20:37:03.075683       1 pv_controller.go:1340] isVolumeReleased[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is released
I0904 20:37:03.075699       1 pv_controller.go:1404] doDeleteVolume [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]
I0904 20:37:03.108531       1 pv_controller.go:1259] deletion of volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:37:03.108553       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Failed
I0904 20:37:03.108560       1 pv_controller.go:858] updating PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Failed
I0904 20:37:03.111841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" with version 2203
I0904 20:37:03.111994       1 pv_controller.go:879] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" entered phase "Failed"
I0904 20:37:03.112131       1 pv_controller.go:901] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:37:03.112294       1 goroutinemap.go:150] Operation for "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" failed. No retries permitted until 2022-09-04 20:37:03.612271452 +0000 UTC m=+756.059819520 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:37:03.111857       1 pv_protection_controller.go:205] Got event on PV pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c
I0904 20:37:03.111869       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" with version 2203
I0904 20:37:03.112899       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase: Failed, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:37:03.113026       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:37:03.113156       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj not found
I0904 20:37:03.113277       1 pv_controller.go:1108] reclaimVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: policy is Delete
I0904 20:37:03.113401       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:03.112462       1 event.go:291] "Event occurred" object="pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted"
I0904 20:37:03.113594       1 pv_controller.go:1765] operation "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" postponed due to exponential backoff
... skipping 12 lines ...
I0904 20:37:08.610291       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-l8mpx" with version 1870
I0904 20:37:08.610933       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-l8mpx]: phase: Bound, bound to: "pvc-d73a6518-8d09-4568-844a-6993ced4e017", bindCompleted: true, boundByController: true
I0904 20:37:08.610990       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-l8mpx]: volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" found: phase: Bound, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:37:08.611005       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-l8mpx]: claim is already correctly bound
I0904 20:37:08.611092       1 pv_controller.go:1012] binding volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" to claim "azuredisk-5194/pvc-l8mpx"
I0904 20:37:08.611111       1 pv_controller.go:910] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: binding to "azuredisk-5194/pvc-l8mpx"
I0904 20:37:08.611031       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase: Failed, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:37:08.611200       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:37:08.611238       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj not found
I0904 20:37:08.611246       1 pv_controller.go:1108] reclaimVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: policy is Delete
I0904 20:37:08.611367       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:08.611336       1 pv_controller.go:922] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: already bound to "azuredisk-5194/pvc-l8mpx"
I0904 20:37:08.611459       1 pv_controller.go:858] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: set phase Bound
... skipping 5 lines ...
I0904 20:37:08.611633       1 pv_controller.go:1038] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" bound to claim "azuredisk-5194/pvc-l8mpx"
I0904 20:37:08.611656       1 pv_controller.go:1039] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:37:08.611673       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-l8mpx" status after binding: phase: Bound, bound to: "pvc-d73a6518-8d09-4568-844a-6993ced4e017", bindCompleted: true, boundByController: true
I0904 20:37:08.611532       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c] started
I0904 20:37:08.617975       1 pv_controller.go:1340] isVolumeReleased[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is released
I0904 20:37:08.617992       1 pv_controller.go:1404] doDeleteVolume [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]
I0904 20:37:08.647323       1 pv_controller.go:1259] deletion of volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:37:08.647400       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Failed
I0904 20:37:08.647412       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase Failed already set
E0904 20:37:08.647442       1 goroutinemap.go:150] Operation for "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" failed. No retries permitted until 2022-09-04 20:37:09.647422442 +0000 UTC m=+762.094970510 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:37:09.189411       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:37:10.547856       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="81.401µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37030" resp=200
I0904 20:37:11.988528       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:37:11.988675       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c to the node "capz-ynxxeg-mp-0000001" mounted false
I0904 20:37:12.041568       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:37:12.041723       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c to the node "capz-ynxxeg-mp-0000001" mounted false
... skipping 15 lines ...
I0904 20:37:23.611089       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is bound to claim azuredisk-5194/pvc-l8mpx
I0904 20:37:23.611136       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx found: phase: Bound, bound to: "pvc-d73a6518-8d09-4568-844a-6993ced4e017", bindCompleted: true, boundByController: true
I0904 20:37:23.611152       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: all is bound
I0904 20:37:23.611161       1 pv_controller.go:858] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: set phase Bound
I0904 20:37:23.611171       1 pv_controller.go:861] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase Bound already set
I0904 20:37:23.611211       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" with version 2203
I0904 20:37:23.611239       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase: Failed, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:37:23.611263       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:37:23.611340       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj not found
I0904 20:37:23.611355       1 pv_controller.go:1108] reclaimVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: policy is Delete
I0904 20:37:23.611373       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:23.611426       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c] started
I0904 20:37:23.611769       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-l8mpx" with version 1870
... skipping 11 lines ...
I0904 20:37:23.613296       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-l8mpx] status: phase Bound already set
I0904 20:37:23.613435       1 pv_controller.go:1038] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" bound to claim "azuredisk-5194/pvc-l8mpx"
I0904 20:37:23.613566       1 pv_controller.go:1039] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:37:23.613689       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-l8mpx" status after binding: phase: Bound, bound to: "pvc-d73a6518-8d09-4568-844a-6993ced4e017", bindCompleted: true, boundByController: true
I0904 20:37:23.618947       1 pv_controller.go:1340] isVolumeReleased[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is released
I0904 20:37:23.618965       1 pv_controller.go:1404] doDeleteVolume [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]
I0904 20:37:23.619079       1 pv_controller.go:1259] deletion of volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) since it's in attaching or detaching state
I0904 20:37:23.619096       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: set phase Failed
I0904 20:37:23.619105       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase Failed already set
E0904 20:37:23.619132       1 goroutinemap.go:150] Operation for "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" failed. No retries permitted until 2022-09-04 20:37:25.619113982 +0000 UTC m=+778.066662050 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) since it's in attaching or detaching state
I0904 20:37:27.517545       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) returned with <nil>
I0904 20:37:27.517593       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c) succeeded
I0904 20:37:27.517603       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c was detached from node:capz-ynxxeg-mp-0000001
I0904 20:37:27.517627       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c") on node "capz-ynxxeg-mp-0000001" 
I0904 20:37:28.168502       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 30 items received
I0904 20:37:28.379932       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:37:30.558930       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53334" resp=200
I0904 20:37:38.340517       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:37:38.578889       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:37:38.587265       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:37:38.611814       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:37:38.611904       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" with version 2203
I0904 20:37:38.611992       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase: Failed, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:37:38.612144       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:37:38.612169       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj not found
I0904 20:37:38.612179       1 pv_controller.go:1108] reclaimVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: policy is Delete
I0904 20:37:38.612199       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:38.612253       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 1867
I0904 20:37:38.612278       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase: Bound, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
... skipping 28 lines ...
I0904 20:37:43.856900       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c
I0904 20:37:43.856960       1 pv_controller.go:1435] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" deleted
I0904 20:37:43.856989       1 pv_controller.go:1283] deleteVolumeOperation [pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: success
I0904 20:37:43.873097       1 pv_protection_controller.go:205] Got event on PV pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c
I0904 20:37:43.873121       1 pv_protection_controller.go:125] Processing PV pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c
I0904 20:37:43.873436       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" with version 2266
I0904 20:37:43.873467       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: phase: Failed, bound to: "azuredisk-5194/pvc-b8svj (uid: e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c)", boundByController: true
I0904 20:37:43.873516       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: volume is bound to claim azuredisk-5194/pvc-b8svj
I0904 20:37:43.873552       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: claim azuredisk-5194/pvc-b8svj not found
I0904 20:37:43.873580       1 pv_controller.go:1108] reclaimVolume[pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c]: policy is Delete
I0904 20:37:43.873599       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]]
I0904 20:37:43.873606       1 pv_controller.go:1763] operation "delete-pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c[eec86fe6-2c83-474b-b6ef-b998003a4bb9]" is already running, skipping
I0904 20:37:43.880951       1 pv_controller_base.go:235] volume "pvc-e80ecff9-892e-4bb4-8f54-77ffb3aeaf1c" deleted
... skipping 150 lines ...
I0904 20:38:16.114373       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx not found
I0904 20:38:16.114380       1 pv_controller.go:1108] reclaimVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: policy is Delete
I0904 20:38:16.114391       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]]
I0904 20:38:16.114397       1 pv_controller.go:1763] operation "delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]" is already running, skipping
I0904 20:38:16.115727       1 pv_controller.go:1340] isVolumeReleased[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is released
I0904 20:38:16.115738       1 pv_controller.go:1404] doDeleteVolume [pvc-d73a6518-8d09-4568-844a-6993ced4e017]
I0904 20:38:16.144964       1 pv_controller.go:1259] deletion of volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:38:16.144982       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: set phase Failed
I0904 20:38:16.144989       1 pv_controller.go:858] updating PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: set phase Failed
I0904 20:38:16.147373       1 pv_protection_controller.go:205] Got event on PV pvc-d73a6518-8d09-4568-844a-6993ced4e017
I0904 20:38:16.147412       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 2326
I0904 20:38:16.147464       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase: Failed, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:38:16.147487       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is bound to claim azuredisk-5194/pvc-l8mpx
I0904 20:38:16.147529       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx not found
I0904 20:38:16.147537       1 pv_controller.go:1108] reclaimVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: policy is Delete
I0904 20:38:16.147547       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]]
I0904 20:38:16.147553       1 pv_controller.go:1763] operation "delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]" is already running, skipping
I0904 20:38:16.148035       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 2326
I0904 20:38:16.148058       1 pv_controller.go:879] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" entered phase "Failed"
I0904 20:38:16.148069       1 pv_controller.go:901] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
E0904 20:38:16.148118       1 goroutinemap.go:150] Operation for "delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]" failed. No retries permitted until 2022-09-04 20:38:16.648098345 +0000 UTC m=+829.095646313 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted
I0904 20:38:16.148354       1 event.go:291] "Event occurred" object="pvc-d73a6518-8d09-4568-844a-6993ced4e017" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_0), could not be deleted"
I0904 20:38:18.655598       1 gc_controller.go:161] GC'ing orphaned
I0904 20:38:18.655632       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:38:18.874068       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0904 20:38:20.549562       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40568" resp=200
I0904 20:38:21.165277       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000000"
... skipping 6 lines ...
I0904 20:38:21.234519       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017 from node "capz-ynxxeg-mp-0000000"
I0904 20:38:21.234555       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017"
I0904 20:38:21.234647       1 azure_controller_vmss.go:175] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017)
I0904 20:38:23.589705       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:23.614355       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:38:23.614471       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 2326
I0904 20:38:23.614561       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase: Failed, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:38:23.614640       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is bound to claim azuredisk-5194/pvc-l8mpx
I0904 20:38:23.614702       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx not found
I0904 20:38:23.614735       1 pv_controller.go:1108] reclaimVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: policy is Delete
I0904 20:38:23.614749       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]]
I0904 20:38:23.614781       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d73a6518-8d09-4568-844a-6993ced4e017] started
I0904 20:38:23.620930       1 pv_controller.go:1340] isVolumeReleased[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is released
I0904 20:38:23.620968       1 pv_controller.go:1404] doDeleteVolume [pvc-d73a6518-8d09-4568-844a-6993ced4e017]
I0904 20:38:23.621007       1 pv_controller.go:1259] deletion of volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) since it's in attaching or detaching state
I0904 20:38:23.621038       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: set phase Failed
I0904 20:38:23.621050       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase Failed already set
E0904 20:38:23.621082       1 goroutinemap.go:150] Operation for "delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]" failed. No retries permitted until 2022-09-04 20:38:24.621059763 +0000 UTC m=+837.068607831 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) since it's in attaching or detaching state
I0904 20:38:23.767996       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:38:30.549277       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="93.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45972" resp=200
I0904 20:38:31.582813       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 55 items received
I0904 20:38:36.511998       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) returned with <nil>
I0904 20:38:36.512043       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017) succeeded
I0904 20:38:36.512215       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017 was detached from node:capz-ynxxeg-mp-0000000
I0904 20:38:36.512246       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017") on node "capz-ynxxeg-mp-0000000" 
I0904 20:38:38.579819       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:38.590253       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:38.614795       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:38:38.614910       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 2326
I0904 20:38:38.614989       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase: Failed, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:38:38.615029       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is bound to claim azuredisk-5194/pvc-l8mpx
I0904 20:38:38.615088       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx not found
I0904 20:38:38.615103       1 pv_controller.go:1108] reclaimVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: policy is Delete
I0904 20:38:38.615118       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]]
I0904 20:38:38.615181       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d73a6518-8d09-4568-844a-6993ced4e017] started
I0904 20:38:38.622308       1 pv_controller.go:1340] isVolumeReleased[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is released
... skipping 5 lines ...
I0904 20:38:43.949459       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d73a6518-8d09-4568-844a-6993ced4e017
I0904 20:38:43.949529       1 pv_controller.go:1435] volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" deleted
I0904 20:38:43.949562       1 pv_controller.go:1283] deleteVolumeOperation [pvc-d73a6518-8d09-4568-844a-6993ced4e017]: success
I0904 20:38:43.960721       1 pv_protection_controller.go:205] Got event on PV pvc-d73a6518-8d09-4568-844a-6993ced4e017
I0904 20:38:43.960750       1 pv_protection_controller.go:125] Processing PV pvc-d73a6518-8d09-4568-844a-6993ced4e017
I0904 20:38:43.960853       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d73a6518-8d09-4568-844a-6993ced4e017" with version 2368
I0904 20:38:43.960959       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: phase: Failed, bound to: "azuredisk-5194/pvc-l8mpx (uid: d73a6518-8d09-4568-844a-6993ced4e017)", boundByController: true
I0904 20:38:43.961048       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: volume is bound to claim azuredisk-5194/pvc-l8mpx
I0904 20:38:43.961146       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: claim azuredisk-5194/pvc-l8mpx not found
I0904 20:38:43.961208       1 pv_controller.go:1108] reclaimVolume[pvc-d73a6518-8d09-4568-844a-6993ced4e017]: policy is Delete
I0904 20:38:43.961248       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]]
I0904 20:38:43.961325       1 pv_controller.go:1763] operation "delete-pvc-d73a6518-8d09-4568-844a-6993ced4e017[da6e4ccb-60e4-4b16-bef9-979b73391379]" is already running, skipping
I0904 20:38:43.969706       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-d73a6518-8d09-4568-844a-6993ced4e017
... skipping 44 lines ...
I0904 20:38:48.605542       1 pv_controller.go:1445] provisionClaim[azuredisk-1353/pvc-f798z]: started
I0904 20:38:48.605650       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-1353/pvc-f798z[ed444965-b83a-4314-8352-b2c61e9b0db4]]
I0904 20:38:48.605804       1 pv_controller.go:1485] provisionClaimOperation [azuredisk-1353/pvc-f798z] started, class: "azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-79947"
I0904 20:38:48.605918       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-1353/pvc-f798z]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0904 20:38:48.609321       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4"
I0904 20:38:48.609585       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-cglpj" duration="24.470876ms"
I0904 20:38:48.609895       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-cglpj" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-cglpj\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:38:48.610094       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-cglpj" startTime="2022-09-04 20:38:48.610020052 +0000 UTC m=+861.057568120"
I0904 20:38:48.610386       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4" (9.820611ms)
I0904 20:38:48.610551       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4", timestamp:time.Time{wall:0xc0bd611623191c55, ext:861036396281, loc:(*time.Location)(0x751a1a0)}}
I0904 20:38:48.610649       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4" (117.002µs)
I0904 20:38:48.613751       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-f798z"
I0904 20:38:48.613762       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-f798z" with version 2396
... skipping 267 lines ...
I0904 20:39:08.661125       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4", timestamp:time.Time{wall:0xc0bd611b249df009, ext:881061878445, loc:(*time.Location)(0x751a1a0)}}
I0904 20:39:08.661355       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf in state Running, deletion time 2022-09-04 20:39:38 +0000 UTC
I0904 20:39:08.661516       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4" (393.806µs)
I0904 20:39:08.661722       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-cglpj-6bcf7555d4-x72lb"
I0904 20:39:08.661746       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-cglpj-6bcf7555d4-x72lb, PodDisruptionBudget controller will avoid syncing.
I0904 20:39:08.661752       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-cglpj-6bcf7555d4-x72lb"
W0904 20:39:08.736340       1 reconciler.go:385] Multi-Attach error for volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4") from node "capz-ynxxeg-mp-0000001" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf on node capz-ynxxeg-mp-0000000
I0904 20:39:08.736447       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-cglpj-6bcf7555d4-x72lb" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-ed444965-b83a-4314-8352-b2c61e9b0db4\" Volume is already used by pod(s) azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf"
I0904 20:39:09.244389       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:39:10.547378       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="111.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48916" resp=200
I0904 20:39:10.549570       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0904 20:39:11.525560       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0904 20:39:12.107501       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:39:13.775216       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
... skipping 387 lines ...
I0904 20:40:44.237183       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:40:44.237220       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:40:44.237245       1 pv_controller.go:1763] operation "delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]" is already running, skipping
I0904 20:40:44.237293       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4] started
I0904 20:40:44.238783       1 pv_controller.go:1340] isVolumeReleased[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is released
I0904 20:40:44.238844       1 pv_controller.go:1404] doDeleteVolume [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]
I0904 20:40:44.264545       1 pv_controller.go:1259] deletion of volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:40:44.264613       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: set phase Failed
I0904 20:40:44.264670       1 pv_controller.go:858] updating PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: set phase Failed
I0904 20:40:44.268004       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2659
I0904 20:40:44.268227       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2659
I0904 20:40:44.268288       1 pv_controller.go:879] volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" entered phase "Failed"
I0904 20:40:44.268340       1 pv_controller.go:901] volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:40:44.268419       1 goroutinemap.go:150] Operation for "delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]" failed. No retries permitted until 2022-09-04 20:40:44.768400445 +0000 UTC m=+977.215948513 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:40:44.268421       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase: Failed, bound to: "azuredisk-1353/pvc-f798z (uid: ed444965-b83a-4314-8352-b2c61e9b0db4)", boundByController: true
I0904 20:40:44.268497       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is bound to claim azuredisk-1353/pvc-f798z
I0904 20:40:44.268541       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: claim azuredisk-1353/pvc-f798z not found
I0904 20:40:44.268583       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:40:44.268632       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:40:44.268670       1 pv_controller.go:1765] operation "delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]" postponed due to exponential backoff
I0904 20:40:44.268213       1 pv_protection_controller.go:205] Got event on PV pvc-ed444965-b83a-4314-8352-b2c61e9b0db4
... skipping 12 lines ...
I0904 20:40:52.284967       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 from node "capz-ynxxeg-mp-0000001"
I0904 20:40:52.285004       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4"
I0904 20:40:52.285104       1 azure_controller_vmss.go:175] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4)
I0904 20:40:53.594875       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:40:53.620511       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:40:53.620612       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2659
I0904 20:40:53.620688       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase: Failed, bound to: "azuredisk-1353/pvc-f798z (uid: ed444965-b83a-4314-8352-b2c61e9b0db4)", boundByController: true
I0904 20:40:53.620757       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is bound to claim azuredisk-1353/pvc-f798z
I0904 20:40:53.620807       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: claim azuredisk-1353/pvc-f798z not found
I0904 20:40:53.620817       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:40:53.620831       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:40:53.620881       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4] started
I0904 20:40:53.627566       1 pv_controller.go:1340] isVolumeReleased[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is released
I0904 20:40:53.627588       1 pv_controller.go:1404] doDeleteVolume [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]
I0904 20:40:53.627623       1 pv_controller.go:1259] deletion of volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) since it's in attaching or detaching state
I0904 20:40:53.627640       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: set phase Failed
I0904 20:40:53.627652       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase Failed already set
E0904 20:40:53.627722       1 goroutinemap.go:150] Operation for "delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]" failed. No retries permitted until 2022-09-04 20:40:54.627666692 +0000 UTC m=+987.075214760 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) since it's in attaching or detaching state
I0904 20:40:53.790749       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-control-plane-lcqxk ReadyCondition updated. Updating timestamp.
I0904 20:40:53.791052       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:40:54.593756       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:40:58.411472       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-4weobx" (10.6µs)
I0904 20:40:58.660539       1 gc_controller.go:161] GC'ing orphaned
I0904 20:40:58.660568       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:41:00.548928       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47384" resp=200
I0904 20:41:03.579084       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 529 items received
I0904 20:41:05.565808       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0904 20:41:08.583728       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:41:08.595888       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:41:08.621409       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:41:08.621523       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2659
I0904 20:41:08.621602       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase: Failed, bound to: "azuredisk-1353/pvc-f798z (uid: ed444965-b83a-4314-8352-b2c61e9b0db4)", boundByController: true
I0904 20:41:08.621742       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is bound to claim azuredisk-1353/pvc-f798z
I0904 20:41:08.621767       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: claim azuredisk-1353/pvc-f798z not found
I0904 20:41:08.621775       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:41:08.621791       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:41:08.621827       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4] started
I0904 20:41:08.627730       1 pv_controller.go:1340] isVolumeReleased[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is released
I0904 20:41:08.627747       1 pv_controller.go:1404] doDeleteVolume [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]
I0904 20:41:08.627778       1 pv_controller.go:1259] deletion of volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) since it's in attaching or detaching state
I0904 20:41:08.627796       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: set phase Failed
I0904 20:41:08.627807       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase Failed already set
E0904 20:41:08.627840       1 goroutinemap.go:150] Operation for "delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]" failed. No retries permitted until 2022-09-04 20:41:10.627820973 +0000 UTC m=+1003.075368941 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) since it's in attaching or detaching state
I0904 20:41:09.304864       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:41:10.547696       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="89.502µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45702" resp=200
I0904 20:41:12.612610       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) returned with <nil>
I0904 20:41:12.612669       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4) succeeded
I0904 20:41:12.612688       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4 was detached from node:capz-ynxxeg-mp-0000001
I0904 20:41:12.612711       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4") on node "capz-ynxxeg-mp-0000001" 
... skipping 7 lines ...
I0904 20:41:20.547108       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="73.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46946" resp=200
I0904 20:41:21.542891       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 23 items received
I0904 20:41:22.573932       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 0 items received
I0904 20:41:23.596278       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:41:23.621923       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:41:23.622195       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2659
I0904 20:41:23.622383       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase: Failed, bound to: "azuredisk-1353/pvc-f798z (uid: ed444965-b83a-4314-8352-b2c61e9b0db4)", boundByController: true
I0904 20:41:23.622521       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is bound to claim azuredisk-1353/pvc-f798z
I0904 20:41:23.622587       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: claim azuredisk-1353/pvc-f798z not found
I0904 20:41:23.622613       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:41:23.622648       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:41:23.622701       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4] started
I0904 20:41:23.627150       1 pv_controller.go:1340] isVolumeReleased[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is released
... skipping 2 lines ...
I0904 20:41:28.923529       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4
I0904 20:41:28.923774       1 pv_controller.go:1435] volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" deleted
I0904 20:41:28.923864       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: success
I0904 20:41:28.929088       1 pv_protection_controller.go:205] Got event on PV pvc-ed444965-b83a-4314-8352-b2c61e9b0db4
I0904 20:41:28.929112       1 pv_protection_controller.go:125] Processing PV pvc-ed444965-b83a-4314-8352-b2c61e9b0db4
I0904 20:41:28.929265       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" with version 2726
I0904 20:41:28.929349       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: phase: Failed, bound to: "azuredisk-1353/pvc-f798z (uid: ed444965-b83a-4314-8352-b2c61e9b0db4)", boundByController: true
I0904 20:41:28.929417       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: volume is bound to claim azuredisk-1353/pvc-f798z
I0904 20:41:28.929442       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: claim azuredisk-1353/pvc-f798z not found
I0904 20:41:28.929450       1 pv_controller.go:1108] reclaimVolume[pvc-ed444965-b83a-4314-8352-b2c61e9b0db4]: policy is Delete
I0904 20:41:28.929465       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ed444965-b83a-4314-8352-b2c61e9b0db4[7ce67fd8-f70d-48ab-a9c1-6ef2751785ca]]
I0904 20:41:28.929537       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ed444965-b83a-4314-8352-b2c61e9b0db4] started
I0904 20:41:28.933287       1 pv_controller.go:1243] Volume "pvc-ed444965-b83a-4314-8352-b2c61e9b0db4" is already being deleted
... skipping 79 lines ...
I0904 20:41:34.511603       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4538/pvc-cs65p] status: phase Bound already set
I0904 20:41:34.511616       1 pv_controller.go:1038] volume "pvc-95d2612f-e1a6-4285-8802-e770096c6a05" bound to claim "azuredisk-4538/pvc-cs65p"
I0904 20:41:34.511633       1 pv_controller.go:1039] volume "pvc-95d2612f-e1a6-4285-8802-e770096c6a05" status after binding: phase: Bound, bound to: "azuredisk-4538/pvc-cs65p (uid: 95d2612f-e1a6-4285-8802-e770096c6a05)", boundByController: true
I0904 20:41:34.511647       1 pv_controller.go:1040] claim "azuredisk-4538/pvc-cs65p" status after binding: phase: Bound, bound to: "pvc-95d2612f-e1a6-4285-8802-e770096c6a05", bindCompleted: true, boundByController: true
I0904 20:41:35.438564       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1353
I0904 20:41:35.464921       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-g5fq4, uid 5ff21795-4894-450a-aaf4-a58adaf2a4f3, event type delete
E0904 20:41:35.477860       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-ktdgl" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0904 20:41:35.488439       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2cef553653d, uid b566c89c-f134-44f0-8271-0647ad287824, event type delete
I0904 20:41:35.491872       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2d16dc1edf2, uid f1c68049-3acb-452a-9f3b-c56dee00fd9d, event type delete
I0904 20:41:35.494892       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2d21e64b036, uid c2f47708-d702-4d10-a87a-8ea93cf92c27, event type delete
I0904 20:41:35.497696       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2d220fd9d9c, uid f9cf3680-a966-476f-9531-4d6a942057b4, event type delete
I0904 20:41:35.500708       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2d227cb1c20, uid 60ffe7a5-b845-4a4b-ab13-2bfa47af37ce, event type delete
I0904 20:41:35.504688       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-cglpj-6bcf7555d4-fm4sf.1711c2d2ea93e3d4, uid 9b0d2f9d-e142-4714-b56c-1307edcbe920, event type delete
... skipping 141 lines ...
I0904 20:41:54.037898       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-zk4v4]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0904 20:41:54.037920       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-59/pvc-zk4v4]: no volume found
I0904 20:41:54.037940       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: set phase Pending
I0904 20:41:54.037953       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: phase Pending already set
I0904 20:41:54.038185       1 event.go:291] "Event occurred" object="azuredisk-59/pvc-zk4v4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0904 20:41:54.061679       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-gvhbj, uid 68135af0-ea36-46b9-a4df-79c846f2ef4e, event type delete
E0904 20:41:54.072999       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-nmlnb" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0904 20:41:54.089245       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0904 20:41:54.089479       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid 06819671-c9e5-463f-be42-85341926a942, event type delete
I0904 20:41:54.089462       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.5µs)
I0904 20:41:54.144393       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.6µs)
I0904 20:41:54.146380       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8266, estimate: 0, errors: <nil>
I0904 20:41:54.160402       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8266" (147.258954ms)
... skipping 247 lines ...
I0904 20:41:56.954488       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-xcbbw] status: phase Bound already set
I0904 20:41:56.954498       1 pv_controller.go:1038] volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" bound to claim "azuredisk-59/pvc-xcbbw"
I0904 20:41:56.954513       1 pv_controller.go:1039] volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:41:56.954527       1 pv_controller.go:1040] claim "azuredisk-59/pvc-xcbbw" status after binding: phase: Bound, bound to: "pvc-8e7977b0-e355-4334-a34a-5c472b859209", bindCompleted: true, boundByController: true
I0904 20:41:57.242897       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7996
I0904 20:41:57.272840       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-zlsz5, uid 9abf0e38-2d8a-4cca-8d98-77102f2b3dda, event type delete
E0904 20:41:57.290987       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-zncvl" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0904 20:41:57.294001       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid 8a80796e-f443-4f4a-abd7-ab44d6606d38, event type delete
I0904 20:41:57.295545       1 publisher.go:186] Finished syncing namespace "azuredisk-7996" (1.313121ms)
I0904 20:41:57.352327       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0904 20:41:57.352555       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (2.6µs)
I0904 20:41:57.352575       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid fdbace68-0806-4617-98b2-bd6d86e747e7, event type delete
I0904 20:41:57.386742       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (1.5µs)
... skipping 318 lines ...
I0904 20:42:37.380257       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:42:37.380282       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:42:37.380295       1 pv_controller.go:1763] operation "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" is already running, skipping
I0904 20:42:37.380354       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e7977b0-e355-4334-a34a-5c472b859209] started
I0904 20:42:37.381849       1 pv_controller.go:1340] isVolumeReleased[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is released
I0904 20:42:37.381866       1 pv_controller.go:1404] doDeleteVolume [pvc-8e7977b0-e355-4334-a34a-5c472b859209]
I0904 20:42:37.443005       1 pv_controller.go:1259] deletion of volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:37.443026       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:42:37.443036       1 pv_controller.go:858] updating PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:42:37.446622       1 pv_protection_controller.go:205] Got event on PV pvc-8e7977b0-e355-4334-a34a-5c472b859209
I0904 20:42:37.446897       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:42:37.446995       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:42:37.447198       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:42:37.447345       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:42:37.447371       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:42:37.447403       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:42:37.447509       1 pv_controller.go:1763] operation "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" is already running, skipping
I0904 20:42:37.447740       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:42:37.447765       1 pv_controller.go:879] volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" entered phase "Failed"
I0904 20:42:37.447934       1 pv_controller.go:901] volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:42:37.448071       1 goroutinemap.go:150] Operation for "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" failed. No retries permitted until 2022-09-04 20:42:37.948031156 +0000 UTC m=+1090.395579124 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:37.448396       1 event.go:291] "Event occurred" object="pvc-8e7977b0-e355-4334-a34a-5c472b859209" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted"
I0904 20:42:38.573275       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 49 items received
I0904 20:42:38.585042       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:42:38.601355       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:42:38.625404       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:42:38.625657       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:42:38.625795       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:42:38.625990       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:42:38.626098       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:42:38.626131       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:42:38.626330       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:42:38.626446       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112" with version 2870
I0904 20:42:38.626639       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: phase: Bound, bound to: "azuredisk-59/pvc-qm7qb (uid: b0a75559-c1a4-4273-902f-c5a0ecf9e112)", boundByController: true
... skipping 41 lines ...
I0904 20:42:38.630882       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: phase Bound already set
I0904 20:42:38.630896       1 pv_controller.go:1038] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" bound to claim "azuredisk-59/pvc-zk4v4"
I0904 20:42:38.630913       1 pv_controller.go:1039] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-zk4v4 (uid: d761d515-66ad-4204-996b-64ef22162cae)", boundByController: true
I0904 20:42:38.630931       1 pv_controller.go:1040] claim "azuredisk-59/pvc-zk4v4" status after binding: phase: Bound, bound to: "pvc-d761d515-66ad-4204-996b-64ef22162cae", bindCompleted: true, boundByController: true
I0904 20:42:38.631098       1 pv_controller.go:1340] isVolumeReleased[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is released
I0904 20:42:38.631113       1 pv_controller.go:1404] doDeleteVolume [pvc-8e7977b0-e355-4334-a34a-5c472b859209]
I0904 20:42:38.654402       1 pv_controller.go:1259] deletion of volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:38.654423       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:42:38.654432       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase Failed already set
E0904 20:42:38.654460       1 goroutinemap.go:150] Operation for "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" failed. No retries permitted until 2022-09-04 20:42:39.65443993 +0000 UTC m=+1092.101987998 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:38.663574       1 gc_controller.go:161] GC'ing orphaned
I0904 20:42:38.663598       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:42:39.339995       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:42:40.547500       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="57.901µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54236" resp=200
I0904 20:42:42.321323       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ynxxeg-mp-0000001"
I0904 20:42:42.321360       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d761d515-66ad-4204-996b-64ef22162cae to the node "capz-ynxxeg-mp-0000001" mounted false
... skipping 74 lines ...
I0904 20:42:53.627664       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: volume is bound to claim azuredisk-59/pvc-zk4v4
I0904 20:42:53.627679       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: claim azuredisk-59/pvc-zk4v4 found: phase: Bound, bound to: "pvc-d761d515-66ad-4204-996b-64ef22162cae", bindCompleted: true, boundByController: true
I0904 20:42:53.627693       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: all is bound
I0904 20:42:53.627701       1 pv_controller.go:858] updating PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: set phase Bound
I0904 20:42:53.627711       1 pv_controller.go:861] updating PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: phase Bound already set
I0904 20:42:53.627724       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:42:53.627748       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:42:53.627770       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:42:53.627790       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:42:53.627798       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:42:53.627814       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:42:53.627861       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e7977b0-e355-4334-a34a-5c472b859209] started
I0904 20:42:53.637652       1 pv_controller.go:1340] isVolumeReleased[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is released
I0904 20:42:53.637670       1 pv_controller.go:1404] doDeleteVolume [pvc-8e7977b0-e355-4334-a34a-5c472b859209]
I0904 20:42:53.662194       1 pv_controller.go:1259] deletion of volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:53.662218       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:42:53.662227       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase Failed already set
E0904 20:42:53.662254       1 goroutinemap.go:150] Operation for "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" failed. No retries permitted until 2022-09-04 20:42:55.662235865 +0000 UTC m=+1108.109783933 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:42:57.584690       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d761d515-66ad-4204-996b-64ef22162cae) returned with <nil>
I0904 20:42:57.584750       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d761d515-66ad-4204-996b-64ef22162cae) succeeded
I0904 20:42:57.584805       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d761d515-66ad-4204-996b-64ef22162cae was detached from node:capz-ynxxeg-mp-0000001
I0904 20:42:57.584918       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-d761d515-66ad-4204-996b-64ef22162cae") on node "capz-ynxxeg-mp-0000001" 
I0904 20:42:57.585109       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-ynxxeg-mp-0000001, refreshing the cache
I0904 20:42:57.749884       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112"
... skipping 48 lines ...
I0904 20:43:08.629910       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4]: already bound to "pvc-d761d515-66ad-4204-996b-64ef22162cae"
I0904 20:43:08.629966       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: set phase Bound
I0904 20:43:08.630026       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: phase Bound already set
I0904 20:43:08.630083       1 pv_controller.go:1038] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" bound to claim "azuredisk-59/pvc-zk4v4"
I0904 20:43:08.630174       1 pv_controller.go:1039] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-zk4v4 (uid: d761d515-66ad-4204-996b-64ef22162cae)", boundByController: true
I0904 20:43:08.630231       1 pv_controller.go:1040] claim "azuredisk-59/pvc-zk4v4" status after binding: phase: Bound, bound to: "pvc-d761d515-66ad-4204-996b-64ef22162cae", bindCompleted: true, boundByController: true
I0904 20:43:08.629041       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:43:08.630408       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:43:08.630487       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:43:08.630507       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:43:08.630555       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:43:08.630599       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112" with version 2870
I0904 20:43:08.630657       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: phase: Bound, bound to: "azuredisk-59/pvc-qm7qb (uid: b0a75559-c1a4-4273-902f-c5a0ecf9e112)", boundByController: true
... skipping 2 lines ...
I0904 20:43:08.630978       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: claim azuredisk-59/pvc-qm7qb found: phase: Bound, bound to: "pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112", bindCompleted: true, boundByController: true
I0904 20:43:08.631049       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: all is bound
I0904 20:43:08.631101       1 pv_controller.go:858] updating PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: set phase Bound
I0904 20:43:08.631142       1 pv_controller.go:861] updating PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: phase Bound already set
I0904 20:43:08.636786       1 pv_controller.go:1340] isVolumeReleased[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is released
I0904 20:43:08.636806       1 pv_controller.go:1404] doDeleteVolume [pvc-8e7977b0-e355-4334-a34a-5c472b859209]
I0904 20:43:08.673834       1 pv_controller.go:1259] deletion of volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:43:08.674005       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:43:08.674025       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase Failed already set
E0904 20:43:08.674054       1 goroutinemap.go:150] Operation for "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" failed. No retries permitted until 2022-09-04 20:43:12.674035265 +0000 UTC m=+1125.121583333 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:43:09.350877       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:43:10.546694       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:33830" resp=200
I0904 20:43:18.197812       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112) returned with <nil>
I0904 20:43:18.197891       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112) succeeded
I0904 20:43:18.197908       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112 was detached from node:capz-ynxxeg-mp-0000001
I0904 20:43:18.197931       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112") on node "capz-ynxxeg-mp-0000001" 
... skipping 18 lines ...
I0904 20:43:23.628861       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: volume is bound to claim azuredisk-59/pvc-zk4v4
I0904 20:43:23.628907       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: claim azuredisk-59/pvc-zk4v4 found: phase: Bound, bound to: "pvc-d761d515-66ad-4204-996b-64ef22162cae", bindCompleted: true, boundByController: true
I0904 20:43:23.628941       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: all is bound
I0904 20:43:23.628987       1 pv_controller.go:858] updating PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: set phase Bound
I0904 20:43:23.629041       1 pv_controller.go:861] updating PersistentVolume[pvc-d761d515-66ad-4204-996b-64ef22162cae]: phase Bound already set
I0904 20:43:23.629109       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:43:23.629169       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:43:23.629233       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:43:23.629281       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:43:23.629323       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:43:23.629364       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:43:23.629421       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e7977b0-e355-4334-a34a-5c472b859209] started
I0904 20:43:23.629799       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-qm7qb" with version 2873
... skipping 27 lines ...
I0904 20:43:23.631167       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-zk4v4] status: phase Bound already set
I0904 20:43:23.631180       1 pv_controller.go:1038] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" bound to claim "azuredisk-59/pvc-zk4v4"
I0904 20:43:23.631202       1 pv_controller.go:1039] volume "pvc-d761d515-66ad-4204-996b-64ef22162cae" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-zk4v4 (uid: d761d515-66ad-4204-996b-64ef22162cae)", boundByController: true
I0904 20:43:23.631221       1 pv_controller.go:1040] claim "azuredisk-59/pvc-zk4v4" status after binding: phase: Bound, bound to: "pvc-d761d515-66ad-4204-996b-64ef22162cae", bindCompleted: true, boundByController: true
I0904 20:43:23.635908       1 pv_controller.go:1340] isVolumeReleased[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is released
I0904 20:43:23.635925       1 pv_controller.go:1404] doDeleteVolume [pvc-8e7977b0-e355-4334-a34a-5c472b859209]
I0904 20:43:23.635958       1 pv_controller.go:1259] deletion of volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) since it's in attaching or detaching state
I0904 20:43:23.635972       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: set phase Failed
I0904 20:43:23.635982       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase Failed already set
E0904 20:43:23.636008       1 goroutinemap.go:150] Operation for "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" failed. No retries permitted until 2022-09-04 20:43:31.635990864 +0000 UTC m=+1144.083538932 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) since it's in attaching or detaching state
I0904 20:43:30.552365       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="103.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48058" resp=200
I0904 20:43:33.570189       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) returned with <nil>
I0904 20:43:33.570253       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209) succeeded
I0904 20:43:33.570264       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209 was detached from node:capz-ynxxeg-mp-0000001
I0904 20:43:33.570344       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209") on node "capz-ynxxeg-mp-0000001" 
I0904 20:43:38.585974       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:43:38.604259       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:43:38.628946       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:43:38.629091       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 2975
I0904 20:43:38.629184       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:43:38.629264       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:43:38.629331       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:43:38.629352       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:43:38.629386       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:43:38.629434       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112" with version 2870
I0904 20:43:38.629501       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b0a75559-c1a4-4273-902f-c5a0ecf9e112]: phase: Bound, bound to: "azuredisk-59/pvc-qm7qb (uid: b0a75559-c1a4-4273-902f-c5a0ecf9e112)", boundByController: true
... skipping 51 lines ...
I0904 20:43:43.859693       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-8e7977b0-e355-4334-a34a-5c472b859209
I0904 20:43:43.859722       1 pv_controller.go:1435] volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" deleted
I0904 20:43:43.859739       1 pv_controller.go:1283] deleteVolumeOperation [pvc-8e7977b0-e355-4334-a34a-5c472b859209]: success
I0904 20:43:43.869811       1 pv_protection_controller.go:205] Got event on PV pvc-8e7977b0-e355-4334-a34a-5c472b859209
I0904 20:43:43.869838       1 pv_protection_controller.go:125] Processing PV pvc-8e7977b0-e355-4334-a34a-5c472b859209
I0904 20:43:43.870115       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e7977b0-e355-4334-a34a-5c472b859209" with version 3074
I0904 20:43:43.870143       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: phase: Failed, bound to: "azuredisk-59/pvc-xcbbw (uid: 8e7977b0-e355-4334-a34a-5c472b859209)", boundByController: true
I0904 20:43:43.870170       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: volume is bound to claim azuredisk-59/pvc-xcbbw
I0904 20:43:43.870186       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: claim azuredisk-59/pvc-xcbbw not found
I0904 20:43:43.870194       1 pv_controller.go:1108] reclaimVolume[pvc-8e7977b0-e355-4334-a34a-5c472b859209]: policy is Delete
I0904 20:43:43.870207       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]]
I0904 20:43:43.870214       1 pv_controller.go:1763] operation "delete-pvc-8e7977b0-e355-4334-a34a-5c472b859209[04ed4177-7ede-4fb5-b30d-334b02446f41]" is already running, skipping
I0904 20:43:43.875067       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-8e7977b0-e355-4334-a34a-5c472b859209
... skipping 358 lines ...
I0904 20:44:11.161840       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-sg4hv"
I0904 20:44:11.188509       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-59
I0904 20:44:11.209721       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-09a599d9-3054-4933-8be5-390b55fec905" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-09a599d9-3054-4933-8be5-390b55fec905") from node "capz-ynxxeg-mp-0000001" 
I0904 20:44:11.209759       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1") from node "capz-ynxxeg-mp-0000001" 
I0904 20:44:11.209843       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-ynxxeg-mp-0000001, refreshing the cache
I0904 20:44:11.221344       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-59, name default-token-hqgv4, uid 600ee418-9b62-44ae-85b9-30718a6aba05, event type delete
E0904 20:44:11.236887       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-59/default: secrets "default-token-rqmfj" is forbidden: unable to create new content in namespace azuredisk-59 because it is being terminated
I0904 20:44:11.283915       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c2fa36d00b28, uid ab4ed455-ca59-4bc5-9ea2-6ce4ec619d99, event type delete
I0904 20:44:11.287862       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c2fcc4c8040e, uid 2d3e8e89-6ee2-49e0-bc48-b01346bfd9bb, event type delete
I0904 20:44:11.290969       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c2ff4c1774f0, uid 157259ab-7590-43b6-85fb-a1a45f283b0c, event type delete
I0904 20:44:11.294246       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c301d919212f, uid f34a5a8e-0007-4a6b-a6b5-4f18f39e29cf, event type delete
I0904 20:44:11.297259       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c3025774114c, uid 58ad20ae-3924-4dfb-8cef-74f28b0af9f9, event type delete
I0904 20:44:11.301102       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-r694j.1711c3025a1032c8, uid 56bbffa3-2ce3-4039-9c99-83da0d494c16, event type delete
... skipping 260 lines ...
I0904 20:44:42.600223       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: claim azuredisk-2546/pvc-8g4kg not found
I0904 20:44:42.600230       1 pv_controller.go:1108] reclaimVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: policy is Delete
I0904 20:44:42.600242       1 pv_controller.go:1752] scheduleOperation[delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]]
I0904 20:44:42.600250       1 pv_controller.go:1763] operation "delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]" is already running, skipping
I0904 20:44:42.607577       1 pv_controller.go:1340] isVolumeReleased[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is released
I0904 20:44:42.607594       1 pv_controller.go:1404] doDeleteVolume [pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]
I0904 20:44:42.635038       1 pv_controller.go:1259] deletion of volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:44:42.635174       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: set phase Failed
I0904 20:44:42.635280       1 pv_controller.go:858] updating PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: set phase Failed
I0904 20:44:42.635823       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 from node "capz-ynxxeg-mp-0000001"
I0904 20:44:42.635859       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1"
I0904 20:44:42.635874       1 azure_controller_vmss.go:175] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1)
I0904 20:44:42.635994       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-09a599d9-3054-4933-8be5-390b55fec905 from node "capz-ynxxeg-mp-0000001"
I0904 20:44:42.641301       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" with version 3248
I0904 20:44:42.641332       1 pv_controller.go:879] volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" entered phase "Failed"
I0904 20:44:42.641381       1 pv_controller.go:901] volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:44:42.641465       1 goroutinemap.go:150] Operation for "delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]" failed. No retries permitted until 2022-09-04 20:44:43.141404449 +0000 UTC m=+1215.588952417 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:44:42.641769       1 event.go:291] "Event occurred" object="pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted"
I0904 20:44:42.642050       1 pv_protection_controller.go:205] Got event on PV pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1
I0904 20:44:42.642106       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" with version 3248
I0904 20:44:42.642131       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: phase: Failed, bound to: "azuredisk-2546/pvc-8g4kg (uid: 50efa8ea-d0fb-4968-9b68-533beea1bfe1)", boundByController: true
I0904 20:44:42.642211       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is bound to claim azuredisk-2546/pvc-8g4kg
I0904 20:44:42.642231       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: claim azuredisk-2546/pvc-8g4kg not found
I0904 20:44:42.642238       1 pv_controller.go:1108] reclaimVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: policy is Delete
I0904 20:44:42.642278       1 pv_controller.go:1752] scheduleOperation[delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]]
I0904 20:44:42.642288       1 pv_controller.go:1765] operation "delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]" postponed due to exponential backoff
I0904 20:44:43.826556       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:44:43.826761       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:44:46.825342       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0904 20:44:50.558700       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="94.202µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37954" resp=200
I0904 20:44:53.608912       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:44:53.636339       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:44:53.636408       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" with version 3248
I0904 20:44:53.636447       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: phase: Failed, bound to: "azuredisk-2546/pvc-8g4kg (uid: 50efa8ea-d0fb-4968-9b68-533beea1bfe1)", boundByController: true
I0904 20:44:53.636499       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is bound to claim azuredisk-2546/pvc-8g4kg
I0904 20:44:53.636516       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: claim azuredisk-2546/pvc-8g4kg not found
I0904 20:44:53.636563       1 pv_controller.go:1108] reclaimVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: policy is Delete
I0904 20:44:53.636585       1 pv_controller.go:1752] scheduleOperation[delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]]
I0904 20:44:53.636607       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-09a599d9-3054-4933-8be5-390b55fec905" with version 3148
I0904 20:44:53.636653       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-09a599d9-3054-4933-8be5-390b55fec905]: phase: Bound, bound to: "azuredisk-2546/pvc-clllt (uid: 09a599d9-3054-4933-8be5-390b55fec905)", boundByController: true
... skipping 18 lines ...
I0904 20:44:53.638254       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-clllt] status: phase Bound already set
I0904 20:44:53.638357       1 pv_controller.go:1038] volume "pvc-09a599d9-3054-4933-8be5-390b55fec905" bound to claim "azuredisk-2546/pvc-clllt"
I0904 20:44:53.638456       1 pv_controller.go:1039] volume "pvc-09a599d9-3054-4933-8be5-390b55fec905" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-clllt (uid: 09a599d9-3054-4933-8be5-390b55fec905)", boundByController: true
I0904 20:44:53.638572       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-clllt" status after binding: phase: Bound, bound to: "pvc-09a599d9-3054-4933-8be5-390b55fec905", bindCompleted: true, boundByController: true
I0904 20:44:53.646448       1 pv_controller.go:1340] isVolumeReleased[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is released
I0904 20:44:53.646464       1 pv_controller.go:1404] doDeleteVolume [pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]
I0904 20:44:53.646516       1 pv_controller.go:1259] deletion of volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) since it's in attaching or detaching state
I0904 20:44:53.646531       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: set phase Failed
I0904 20:44:53.646539       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: phase Failed already set
E0904 20:44:53.646564       1 goroutinemap.go:150] Operation for "delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]" failed. No retries permitted until 2022-09-04 20:44:54.646546959 +0000 UTC m=+1227.094095027 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) since it's in attaching or detaching state
I0904 20:44:54.252122       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0904 20:44:55.925071       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 7 items received
I0904 20:44:57.938867       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) returned with <nil>
I0904 20:44:57.938922       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1) succeeded
I0904 20:44:57.938960       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1 was detached from node:capz-ynxxeg-mp-0000001
I0904 20:44:57.938992       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1") on node "capz-ynxxeg-mp-0000001" 
... skipping 6 lines ...
I0904 20:45:00.549566       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="86.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48872" resp=200
I0904 20:45:04.529296       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 1 items received
I0904 20:45:08.588194       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:45:08.609704       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:45:08.637234       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:45:08.637523       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" with version 3248
I0904 20:45:08.637718       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: phase: Failed, bound to: "azuredisk-2546/pvc-8g4kg (uid: 50efa8ea-d0fb-4968-9b68-533beea1bfe1)", boundByController: true
I0904 20:45:08.637810       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is bound to claim azuredisk-2546/pvc-8g4kg
I0904 20:45:08.637864       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: claim azuredisk-2546/pvc-8g4kg not found
I0904 20:45:08.637875       1 pv_controller.go:1108] reclaimVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: policy is Delete
I0904 20:45:08.637891       1 pv_controller.go:1752] scheduleOperation[delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]]
I0904 20:45:08.637614       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-clllt" with version 3150
I0904 20:45:08.637984       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-clllt]: phase: Bound, bound to: "pvc-09a599d9-3054-4933-8be5-390b55fec905", bindCompleted: true, boundByController: true
... skipping 30 lines ...
I0904 20:45:13.886867       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1
I0904 20:45:13.886942       1 pv_controller.go:1435] volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" deleted
I0904 20:45:13.886972       1 pv_controller.go:1283] deleteVolumeOperation [pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: success
I0904 20:45:13.896144       1 pv_protection_controller.go:205] Got event on PV pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1
I0904 20:45:13.896745       1 pv_protection_controller.go:125] Processing PV pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1
I0904 20:45:13.896699       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" with version 3295
I0904 20:45:13.897545       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: phase: Failed, bound to: "azuredisk-2546/pvc-8g4kg (uid: 50efa8ea-d0fb-4968-9b68-533beea1bfe1)", boundByController: true
I0904 20:45:13.897742       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: volume is bound to claim azuredisk-2546/pvc-8g4kg
I0904 20:45:13.897872       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: claim azuredisk-2546/pvc-8g4kg not found
I0904 20:45:13.897972       1 pv_controller.go:1108] reclaimVolume[pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1]: policy is Delete
I0904 20:45:13.898085       1 pv_controller.go:1752] scheduleOperation[delete-pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1[c8916f28-3247-48c9-b520-b80233f24c8a]]
I0904 20:45:13.898252       1 pv_controller.go:1231] deleteVolumeOperation [pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1] started
I0904 20:45:13.901893       1 pv_controller.go:1243] Volume "pvc-50efa8ea-d0fb-4968-9b68-533beea1bfe1" is already being deleted
... skipping 195 lines ...
I0904 20:45:35.273090       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-8582/pvc-lf8pq[72d7b30a-8f0a-4261-80b4-45e1fede80cf]]
I0904 20:45:35.273190       1 pv_controller.go:1763] operation "provision-azuredisk-8582/pvc-lf8pq[72d7b30a-8f0a-4261-80b4-45e1fede80cf]" is already running, skipping
I0904 20:45:35.279345       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-ynxxeg-dynamic-pvc-655ef002-ee06-42e7-9c0f-9e03ad901702 StorageAccountType:Premium_LRS Size:10
I0904 20:45:35.279492       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf StorageAccountType:StandardSSD_LRS Size:10
I0904 20:45:36.456550       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1598
I0904 20:45:36.469763       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1598, name default-token-76pcx, uid c4c90ba1-bbff-44f1-ab3b-52a12e9dbb10, event type delete
E0904 20:45:36.481591       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1598/default: secrets "default-token-86jcf" is forbidden: unable to create new content in namespace azuredisk-1598 because it is being terminated
I0904 20:45:36.509395       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1598, name kube-root-ca.crt, uid 6ca453ee-a22e-417b-bcf9-8fa1add99e73, event type delete
I0904 20:45:36.512564       1 publisher.go:186] Finished syncing namespace "azuredisk-1598" (3.098957ms)
I0904 20:45:36.553770       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 66 items received
I0904 20:45:36.559268       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1598/default), service account deleted, removing tokens
I0904 20:45:36.559481       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.401µs)
I0904 20:45:36.559686       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1598, name default, uid c233d7a8-b7e3-42c9-90a7-9877484d2d5c, event type delete
... skipping 561 lines ...
I0904 20:46:16.140900       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:46:16.140909       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:46:16.140915       1 pv_controller.go:1763] operation "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" is already running, skipping
I0904 20:46:16.140938       1 pv_controller.go:1231] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf] started
I0904 20:46:16.142554       1 pv_controller.go:1340] isVolumeReleased[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is released
I0904 20:46:16.142573       1 pv_controller.go:1404] doDeleteVolume [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]
I0904 20:46:16.164032       1 pv_controller.go:1259] deletion of volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:16.164054       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:46:16.164064       1 pv_controller.go:858] updating PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:46:16.166572       1 pv_protection_controller.go:205] Got event on PV pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf
I0904 20:46:16.166597       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:46:16.166637       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:46:16.166659       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:46:16.166675       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:46:16.166681       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:46:16.166709       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:46:16.166715       1 pv_controller.go:1763] operation "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" is already running, skipping
I0904 20:46:16.166891       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:46:16.166908       1 pv_controller.go:879] volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" entered phase "Failed"
I0904 20:46:16.166917       1 pv_controller.go:901] volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
E0904 20:46:16.166967       1 goroutinemap.go:150] Operation for "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" failed. No retries permitted until 2022-09-04 20:46:16.666933885 +0000 UTC m=+1309.114481853 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:16.167003       1 event.go:291] "Event occurred" object="pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted"
I0904 20:46:18.583573       1 controller.go:272] Triggering nodeSync
I0904 20:46:18.583605       1 controller.go:291] nodeSync has been triggered
I0904 20:46:18.583613       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:46:18.583622       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:46:18.583628       1 controller.go:731] It took 1.6401e-05 seconds to finish nodeSyncInternal
... skipping 45 lines ...
I0904 20:46:23.641812       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: volume is bound to claim azuredisk-8582/pvc-r6krw
I0904 20:46:23.641827       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: claim azuredisk-8582/pvc-r6krw found: phase: Bound, bound to: "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac", bindCompleted: true, boundByController: true
I0904 20:46:23.641840       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: all is bound
I0904 20:46:23.641847       1 pv_controller.go:858] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: set phase Bound
I0904 20:46:23.641856       1 pv_controller.go:861] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: phase Bound already set
I0904 20:46:23.641868       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:46:23.641906       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:46:23.641924       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:46:23.641958       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:46:23.641971       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:46:23.641987       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:46:23.642034       1 pv_controller.go:1231] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf] started
I0904 20:46:23.642167       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-jrxq5" with version 3399
... skipping 27 lines ...
I0904 20:46:23.642927       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-r6krw] status: phase Bound already set
I0904 20:46:23.642939       1 pv_controller.go:1038] volume "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac" bound to claim "azuredisk-8582/pvc-r6krw"
I0904 20:46:23.642957       1 pv_controller.go:1039] volume "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-r6krw (uid: 4e68e177-6ef9-4a8f-a761-f6b95bf21fac)", boundByController: true
I0904 20:46:23.642992       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-r6krw" status after binding: phase: Bound, bound to: "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac", bindCompleted: true, boundByController: true
I0904 20:46:23.646952       1 pv_controller.go:1340] isVolumeReleased[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is released
I0904 20:46:23.646969       1 pv_controller.go:1404] doDeleteVolume [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]
I0904 20:46:23.672635       1 pv_controller.go:1259] deletion of volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:23.672822       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:46:23.672980       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase Failed already set
E0904 20:46:23.673150       1 goroutinemap.go:150] Operation for "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" failed. No retries permitted until 2022-09-04 20:46:24.673126594 +0000 UTC m=+1317.120674562 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:23.841902       1 node_lifecycle_controller.go:1047] Node capz-ynxxeg-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:46:24.172801       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 43 items received
I0904 20:46:24.584804       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 18 items received
I0904 20:46:30.547297       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55498" resp=200
I0904 20:46:34.368523       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:46:38.106179       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac) returned with <nil>
... skipping 18 lines ...
I0904 20:46:38.643095       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: volume is bound to claim azuredisk-8582/pvc-r6krw
I0904 20:46:38.643110       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: claim azuredisk-8582/pvc-r6krw found: phase: Bound, bound to: "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac", bindCompleted: true, boundByController: true
I0904 20:46:38.643136       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: all is bound
I0904 20:46:38.643147       1 pv_controller.go:858] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: set phase Bound
I0904 20:46:38.643157       1 pv_controller.go:861] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: phase Bound already set
I0904 20:46:38.643174       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:46:38.643195       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:46:38.643214       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:46:38.643235       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:46:38.643246       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:46:38.643260       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:46:38.643289       1 pv_controller.go:1231] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf] started
I0904 20:46:38.642750       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-r6krw" with version 3404
... skipping 29 lines ...
I0904 20:46:38.644053       1 pv_controller.go:1039] volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-jrxq5 (uid: 655ef002-ee06-42e7-9c0f-9e03ad901702)", boundByController: true
I0904 20:46:38.644071       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-jrxq5" status after binding: phase: Bound, bound to: "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702", bindCompleted: true, boundByController: true
I0904 20:46:38.652382       1 pv_controller.go:1340] isVolumeReleased[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is released
I0904 20:46:38.652403       1 pv_controller.go:1404] doDeleteVolume [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]
I0904 20:46:38.671128       1 gc_controller.go:161] GC'ing orphaned
I0904 20:46:38.671154       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:46:38.673428       1 pv_controller.go:1259] deletion of volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:38.673448       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:46:38.673457       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase Failed already set
E0904 20:46:38.673590       1 goroutinemap.go:150] Operation for "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" failed. No retries permitted until 2022-09-04 20:46:40.673572536 +0000 UTC m=+1333.121120504 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/virtualMachineScaleSets/capz-ynxxeg-mp-0/virtualMachines/capz-ynxxeg-mp-0_1), could not be deleted
I0904 20:46:39.441693       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:46:40.547772       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="109.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57196" resp=200
I0904 20:46:40.566895       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 7 items received
I0904 20:46:41.553615       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 2 items received
I0904 20:46:43.551750       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 28 items received
I0904 20:46:47.533494       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 6 items received
... skipping 37 lines ...
I0904 20:46:53.643674       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-jrxq5] status: set phase Bound
I0904 20:46:53.643696       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-jrxq5] status: phase Bound already set
I0904 20:46:53.643707       1 pv_controller.go:1038] volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" bound to claim "azuredisk-8582/pvc-jrxq5"
I0904 20:46:53.643724       1 pv_controller.go:1039] volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-jrxq5 (uid: 655ef002-ee06-42e7-9c0f-9e03ad901702)", boundByController: true
I0904 20:46:53.643763       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-jrxq5" status after binding: phase: Bound, bound to: "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702", bindCompleted: true, boundByController: true
I0904 20:46:53.642819       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:46:53.643794       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:46:53.643900       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:46:53.643965       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:46:53.643974       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:46:53.643989       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:46:53.644049       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" with version 3396
I0904 20:46:53.644071       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-655ef002-ee06-42e7-9c0f-9e03ad901702]: phase: Bound, bound to: "azuredisk-8582/pvc-jrxq5 (uid: 655ef002-ee06-42e7-9c0f-9e03ad901702)", boundByController: true
... skipping 9 lines ...
I0904 20:46:53.644743       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: all is bound
I0904 20:46:53.644784       1 pv_controller.go:858] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: set phase Bound
I0904 20:46:53.644895       1 pv_controller.go:861] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: phase Bound already set
I0904 20:46:53.644142       1 pv_controller.go:1231] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf] started
I0904 20:46:53.660619       1 pv_controller.go:1340] isVolumeReleased[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is released
I0904 20:46:53.660634       1 pv_controller.go:1404] doDeleteVolume [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]
I0904 20:46:53.660666       1 pv_controller.go:1259] deletion of volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) since it's in attaching or detaching state
I0904 20:46:53.660676       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:46:53.660684       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase Failed already set
E0904 20:46:53.660707       1 goroutinemap.go:150] Operation for "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" failed. No retries permitted until 2022-09-04 20:46:57.660691426 +0000 UTC m=+1350.108239494 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) since it's in attaching or detaching state
I0904 20:46:58.672158       1 gc_controller.go:161] GC'ing orphaned
I0904 20:46:58.672190       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:46:58.677936       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 8 items received
I0904 20:47:00.547395       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="105.802µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59522" resp=200
I0904 20:47:08.591070       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:47:08.615782       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 10 lines ...
I0904 20:47:08.643977       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: volume is bound to claim azuredisk-8582/pvc-r6krw
I0904 20:47:08.644045       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: claim azuredisk-8582/pvc-r6krw found: phase: Bound, bound to: "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac", bindCompleted: true, boundByController: true
I0904 20:47:08.644080       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: all is bound
I0904 20:47:08.644117       1 pv_controller.go:858] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: set phase Bound
I0904 20:47:08.644145       1 pv_controller.go:861] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: phase Bound already set
I0904 20:47:08.644198       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:47:08.644265       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:47:08.644332       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:47:08.644377       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:47:08.644411       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:47:08.644444       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:47:08.644508       1 pv_controller.go:1231] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf] started
I0904 20:47:08.643503       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-r6krw" with version 3404
... skipping 27 lines ...
I0904 20:47:08.648730       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-jrxq5] status: phase Bound already set
I0904 20:47:08.648863       1 pv_controller.go:1038] volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" bound to claim "azuredisk-8582/pvc-jrxq5"
I0904 20:47:08.648992       1 pv_controller.go:1039] volume "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-jrxq5 (uid: 655ef002-ee06-42e7-9c0f-9e03ad901702)", boundByController: true
I0904 20:47:08.649126       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-jrxq5" status after binding: phase: Bound, bound to: "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702", bindCompleted: true, boundByController: true
I0904 20:47:08.663296       1 pv_controller.go:1340] isVolumeReleased[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is released
I0904 20:47:08.663334       1 pv_controller.go:1404] doDeleteVolume [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]
I0904 20:47:08.663367       1 pv_controller.go:1259] deletion of volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) since it's in attaching or detaching state
I0904 20:47:08.663379       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: set phase Failed
I0904 20:47:08.663389       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase Failed already set
E0904 20:47:08.663417       1 goroutinemap.go:150] Operation for "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" failed. No retries permitted until 2022-09-04 20:47:16.663398809 +0000 UTC m=+1369.110946777 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) since it's in attaching or detaching state
I0904 20:47:08.722737       1 azure_controller_vmss.go:187] azureDisk - update(capz-ynxxeg): vm(capz-ynxxeg-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) returned with <nil>
I0904 20:47:08.722779       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf) succeeded
I0904 20:47:08.722788       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf was detached from node:capz-ynxxeg-mp-0000001
I0904 20:47:08.722809       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf") on node "capz-ynxxeg-mp-0000001" 
I0904 20:47:09.458982       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:47:10.558366       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44290" resp=200
... skipping 32 lines ...
I0904 20:47:23.645638       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: volume is bound to claim azuredisk-8582/pvc-r6krw
I0904 20:47:23.645714       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: claim azuredisk-8582/pvc-r6krw found: phase: Bound, bound to: "pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac", bindCompleted: true, boundByController: true
I0904 20:47:23.645737       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: all is bound
I0904 20:47:23.645746       1 pv_controller.go:858] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: set phase Bound
I0904 20:47:23.645788       1 pv_controller.go:861] updating PersistentVolume[pvc-4e68e177-6ef9-4a8f-a761-f6b95bf21fac]: phase Bound already set
I0904 20:47:23.645812       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3497
I0904 20:47:23.645848       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:47:23.645893       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:47:23.645945       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:47:23.645958       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:47:23.645991       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:47:23.645208       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-jrxq5" with version 3399
I0904 20:47:23.646079       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-jrxq5]: phase: Bound, bound to: "pvc-655ef002-ee06-42e7-9c0f-9e03ad901702", bindCompleted: true, boundByController: true
... skipping 18 lines ...
I0904 20:47:28.890397       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf
I0904 20:47:28.890441       1 pv_controller.go:1435] volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" deleted
I0904 20:47:28.890455       1 pv_controller.go:1283] deleteVolumeOperation [pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: success
I0904 20:47:28.900372       1 pv_protection_controller.go:205] Got event on PV pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf
I0904 20:47:28.900960       1 pv_protection_controller.go:125] Processing PV pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf
I0904 20:47:28.901296       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf" with version 3605
I0904 20:47:28.901334       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: phase: Failed, bound to: "azuredisk-8582/pvc-lf8pq (uid: 72d7b30a-8f0a-4261-80b4-45e1fede80cf)", boundByController: true
I0904 20:47:28.901376       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: volume is bound to claim azuredisk-8582/pvc-lf8pq
I0904 20:47:28.901401       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: claim azuredisk-8582/pvc-lf8pq not found
I0904 20:47:28.901414       1 pv_controller.go:1108] reclaimVolume[pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf]: policy is Delete
I0904 20:47:28.901428       1 pv_controller.go:1752] scheduleOperation[delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]]
I0904 20:47:28.901438       1 pv_controller.go:1763] operation "delete-pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf[5923466b-05f2-4a60-a147-aae865305582]" is already running, skipping
I0904 20:47:28.905199       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf
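The long sequence above for pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf follows the same pattern through to completion: the bound claim azuredisk-8582/pvc-lf8pq is gone, the volume's reclaim policy is Delete, so the controller keeps scheduling deleteVolumeOperation, backing off 500ms, 1s, 2s, 4s, 8s while the disk is still attached, and finally deletes the disk and removes the protection finalizer at 20:47:28. A heavily simplified paraphrase of that reclaim decision is sketched below, for orientation only; the types and helper names are assumptions, not the pv_controller's real API.

```go
package main

import "fmt"

// Simplified stand-ins for the objects the PV controller works with.
type claimRef struct{ namespace, name string }

type persistentVolume struct {
	name          string
	phase         string // "Bound", "Released", "Failed", ...
	reclaimPolicy string // "Delete", "Retain", "Recycle"
	claim         claimRef
}

// reclaimVolume mirrors, in spirit, the branch visible in the log: once the
// bound claim is gone and the policy is Delete, the controller schedules
// deleteVolumeOperation (which may fail and be retried with backoff).
func reclaimVolume(pv persistentVolume, claimExists func(claimRef) bool, scheduleDelete func(string)) {
	if claimExists(pv.claim) {
		fmt.Printf("volume %s: claim %s/%s still present, nothing to reclaim\n",
			pv.name, pv.claim.namespace, pv.claim.name)
		return
	}
	switch pv.reclaimPolicy {
	case "Delete":
		fmt.Printf("volume %s: claim not found, policy is Delete, scheduling delete\n", pv.name)
		scheduleDelete(pv.name)
	case "Retain":
		fmt.Printf("volume %s: claim not found, policy is Retain, leaving volume for the admin\n", pv.name)
	default:
		fmt.Printf("volume %s: unhandled reclaim policy %q\n", pv.name, pv.reclaimPolicy)
	}
}

func main() {
	pv := persistentVolume{
		name:          "pvc-72d7b30a-8f0a-4261-80b4-45e1fede80cf",
		phase:         "Failed",
		reclaimPolicy: "Delete",
		claim:         claimRef{"azuredisk-8582", "pvc-lf8pq"},
	}
	claimExists := func(claimRef) bool { return false } // the test already deleted the claim
	reclaimVolume(pv, claimExists, func(name string) {
		fmt.Println("scheduleOperation[delete-" + name + "]")
	})
}
```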
... skipping 206 lines ...
I0904 20:47:50.502984       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (15.494813ms)
I0904 20:47:50.546709       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57080" resp=200
I0904 20:47:51.836022       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0904 20:47:51.873080       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 162ddf59-1c9e-4b01-9a06-66f741f56b5d, event type delete
I0904 20:47:51.877247       1 publisher.go:186] Finished syncing namespace "azuredisk-7726" (3.891854ms)
I0904 20:47:51.891402       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-94kln, uid b8c66f18-458f-4569-881d-785927abbf46, event type delete
E0904 20:47:51.902416       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-2rb47" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0904 20:47:51.930591       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0904 20:47:51.930927       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.7µs)
I0904 20:47:51.931061       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 530409c6-e7ad-46db-a758-132de47efcbb, event type delete
I0904 20:47:51.964249       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (1.4µs)
I0904 20:47:51.965917       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
I0904 20:47:51.973718       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7726" (141.697546ms)
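The tokens_controller errors of the form "unable to create new content in namespace ... because it is being terminated" recur throughout teardown; they are the token controller racing namespace deletion and resolve on their own once the namespace is fully removed, as the subsequent "Finished syncing namespace" lines show. A hedged client-go sketch of waiting for a test namespace to disappear follows; the kubeconfig path, namespace name, and timeout are illustrative assumptions, not values taken from this job.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNamespaceGone polls until the namespace no longer exists, tolerating
// the transient "is being terminated" window the controller-manager logs.
func waitForNamespaceGone(client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := client.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace fully deleted
		}
		if err != nil {
			return false, err // unexpected API error
		}
		return false, nil // still terminating, keep polling
	})
}

func main() {
	// "./kubeconfig" matches the workload-cluster kubeconfig this job writes out;
	// the path and namespace name here are illustrative only.
	config, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForNamespaceGone(client, "azuredisk-7726", 5*time.Minute); err != nil {
		fmt.Println("namespace still present:", err)
	}
}
```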
... skipping 44 lines ...
I0904 20:47:54.004673       1 pv_controller.go:1763] operation "provision-azuredisk-7051/pvc-zw2d4[19f49229-1297-4517-8bbc-20e7d420f3ab]" is already running, skipping
I0904 20:47:54.006439       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-ynxxeg-dynamic-pvc-19f49229-1297-4517-8bbc-20e7d420f3ab StorageAccountType:Standard_LRS Size:10
I0904 20:47:55.178193       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8582
I0904 20:47:55.178212       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (37.4µs)
I0904 20:47:55.378058       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0904 20:47:55.454112       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-qpghl, uid f328494a-7ae7-4574-802b-0e30800575fa, event type delete
E0904 20:47:55.466405       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-lzstr" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0904 20:47:55.474324       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid c34c5366-7386-41d7-aa86-8e3587c04cee, event type delete
I0904 20:47:55.477399       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (2.986341ms)
I0904 20:47:55.494679       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0904 20:47:55.494853       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (3.2µs)
I0904 20:47:55.495059       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid c8e19825-53a7-4300-a270-b7618af356f6, event type delete
I0904 20:47:55.506357       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.3µs)
... skipping 83 lines ...
I0904 20:47:57.036057       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-b9rr2, PodDisruptionBudget controller will avoid syncing.
I0904 20:47:57.036191       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-b9rr2"
I0904 20:47:57.096574       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-19f49229-1297-4517-8bbc-20e7d420f3ab" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-19f49229-1297-4517-8bbc-20e7d420f3ab") from node "capz-ynxxeg-mp-0000001" 
I0904 20:47:57.135321       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-ynxxeg-mp-0000001, refreshing the cache
I0904 20:47:57.160960       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4547
I0904 20:47:57.182533       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-sbmtb, uid 6687cef3-43aa-4d52-ba30-3173f775c704, event type delete
E0904 20:47:57.194404       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-p2qfs" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0904 20:47:57.228445       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-ynxxeg-dynamic-pvc-19f49229-1297-4517-8bbc-20e7d420f3ab. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ynxxeg/providers/Microsoft.Compute/disks/capz-ynxxeg-dynamic-pvc-19f49229-1297-4517-8bbc-20e7d420f3ab" to node "capz-ynxxeg-mp-0000001".
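The attach above (20:47:57) first asks GetDiskLun for the disk's LUN, gets "cannot find Lun" because the disk is not yet attached, and then initiates the attach to capz-ynxxeg-mp-0000001; once attached, the LUN is how the node addresses the data disk. A toy sketch of choosing the lowest free LUN on a VM follows, purely illustrative: the real selection is performed by the Azure cloud provider from the VM's storage profile, and the helper below is an assumption.

```go
package main

import "fmt"

// lowestFreeLUN returns the smallest LUN in [0, maxLUNs) that is not already
// taken by an attached data disk, or -1 if the VM has no free slot.
// Illustrative only; the real provider derives usedLUNs from the VM object.
func lowestFreeLUN(usedLUNs []int32, maxLUNs int32) int32 {
	used := make(map[int32]bool, len(usedLUNs))
	for _, l := range usedLUNs {
		used[l] = true
	}
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun
		}
	}
	return -1
}

func main() {
	// Suppose the VMSS instance already has data disks on LUNs 0 and 2.
	fmt.Println(lowestFreeLUN([]int32{0, 2}, 32)) // prints 1
}
```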
I0904 20:47:57.230025       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4547, name kube-root-ca.crt, uid 1b29e4a3-4271-4ab3-90f3-9cd1c3639822, event type delete
I0904 20:47:57.233375       1 publisher.go:186] Finished syncing namespace "azuredisk-4547" (3.302345ms)
I0904 20:47:57.258988       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4547/default), service account deleted, removing tokens
I0904 20:47:57.259257       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (2.6µs)
I0904 20:47:57.259536       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4547, name default, uid c7d24a17-f929-410d-93ed-9e49c92d7b73, event type delete
... skipping 931 lines ...
I0904 20:51:01.692509       1 publisher.go:186] Finished syncing namespace "azuredisk-6200" (4.521763ms)
I0904 20:51:02.607744       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6200" (3.1µs)
I0904 20:51:02.733436       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5320" (7.298601ms)
I0904 20:51:02.735387       1 publisher.go:186] Finished syncing namespace "azuredisk-5320" (9.038525ms)
I0904 20:51:02.918076       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1166
I0904 20:51:02.972420       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1166, name default-token-rr6zz, uid 3d0d9e65-9cef-47ad-93e9-3d5fea69d387, event type delete
E0904 20:51:02.984808       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1166/default: secrets "default-token-hp79f" is forbidden: unable to create new content in namespace azuredisk-1166 because it is being terminated
I0904 20:51:03.009856       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1166, name kube-root-ca.crt, uid 02faf193-1e7d-464d-8ab0-2b772be7d3f5, event type delete
I0904 20:51:03.011947       1 publisher.go:186] Finished syncing namespace "azuredisk-1166" (1.956627ms)
I0904 20:51:03.029405       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1166/default), service account deleted, removing tokens
I0904 20:51:03.029520       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1166" (3µs)
I0904 20:51:03.029610       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1166, name default, uid 3b4a6ae3-1196-4468-8f1f-a33e169e440c, event type delete
I0904 20:51:03.043554       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1166" (1.6µs)
I0904 20:51:03.044356       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1166, estimate: 0, errors: <nil>
I0904 20:51:03.052267       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1166" (136.260079ms)
I0904 20:51:03.643577       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5320" (22.001µs)
I0904 20:51:03.760873       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (5.227472ms)
I0904 20:51:03.762702       1 publisher.go:186] Finished syncing namespace "azuredisk-9103" (6.803994ms)
I0904 20:51:04.509852       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4415
I0904 20:51:04.528527       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4415, name default-token-pvzm7, uid 791ed977-f17e-48e6-af9a-ed9470eb2a30, event type delete
E0904 20:51:04.540160       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4415/default: secrets "default-token-69njv" is forbidden: unable to create new content in namespace azuredisk-4415 because it is being terminated
I0904 20:51:04.550611       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4415, name kube-root-ca.crt, uid def9761d-a8ea-45bf-89fb-e2d112bca27f, event type delete
I0904 20:51:04.553454       1 publisher.go:186] Finished syncing namespace "azuredisk-4415" (2.776138ms)
I0904 20:51:04.629911       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4415" (3.601µs)
I0904 20:51:04.630091       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4415/default), service account deleted, removing tokens
I0904 20:51:04.630112       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4415, name default, uid 1735328b-624f-42a0-bfdd-04f31af2515d, event type delete
I0904 20:51:04.642583       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4415, estimate: 0, errors: <nil>
... skipping 7 lines ...
I0904 20:51:05.184032       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9183" (170.006345ms)
I0904 20:51:05.184068       1 namespace_controller.go:157] Content remaining in namespace azuredisk-9183, waiting 16 seconds
I0904 20:51:05.550166       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-6720
I0904 20:51:05.603853       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-6720, name kube-root-ca.crt, uid 47184c44-0b79-476f-ada6-96cd3d154ea6, event type delete
I0904 20:51:05.606731       1 publisher.go:186] Finished syncing namespace "azuredisk-6720" (2.855539ms)
I0904 20:51:05.619804       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-6720, name default-token-nvdqz, uid 82873c7c-871d-42e2-ba17-e485f5e5926d, event type delete
E0904 20:51:05.632477       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6720/default: secrets "default-token-mz2c2" is forbidden: unable to create new content in namespace azuredisk-6720 because it is being terminated
I0904 20:51:05.654219       1 tokens_controller.go:252] syncServiceAccount(azuredisk-6720/default), service account deleted, removing tokens
I0904 20:51:05.654266       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.3µs)
I0904 20:51:05.654341       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-6720, name default, uid f13787d1-04f6-43f0-b399-fa74adfd115c, event type delete
I0904 20:51:05.689858       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-6720, estimate: 0, errors: <nil>
I0904 20:51:05.690242       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.5µs)
I0904 20:51:05.701313       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-6720" (153.880023ms)
... skipping 51 lines ...
I0904 20:51:08.661117       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9183/pvc-azuredisk-volume-tester-q8qrz-0] status: set phase Bound
I0904 20:51:08.661280       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9183/pvc-azuredisk-volume-tester-q8qrz-0] status: phase Bound already set
I0904 20:51:08.661445       1 pv_controller.go:1038] volume "pvc-b4b7365d-3808-4226-a046-e13ca3b065a8" bound to claim "azuredisk-9183/pvc-azuredisk-volume-tester-q8qrz-0"
I0904 20:51:08.661608       1 pv_controller.go:1039] volume "pvc-b4b7365d-3808-4226-a046-e13ca3b065a8" status after binding: phase: Bound, bound to: "azuredisk-9183/pvc-azuredisk-volume-tester-q8qrz-0 (uid: b4b7365d-3808-4226-a046-e13ca3b065a8)", boundByController: true
I0904 20:51:08.661797       1 pv_controller.go:1040] claim "azuredisk-9183/pvc-azuredisk-volume-tester-q8qrz-0" status after binding: phase: Bound, bound to: "pvc-b4b7365d-3808-4226-a046-e13ca3b065a8", bindCompleted: true, boundByController: true
I0904 20:51:08.685119       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5320, name default-token-n7lf4, uid 120e4040-423c-48bc-97af-d034088ebc87, event type delete
E0904 20:51:08.701650       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5320/default: secrets "default-token-xhkkq" is forbidden: unable to create new content in namespace azuredisk-5320 because it is being terminated
I0904 20:51:08.730142       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5320, name kube-root-ca.crt, uid 1b2e9f5f-8984-4a5b-8572-d7436b4642e0, event type delete
I0904 20:51:08.732538       1 publisher.go:186] Finished syncing namespace "azuredisk-5320" (2.774238ms)
I0904 20:51:08.764378       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5320/default), service account deleted, removing tokens
I0904 20:51:08.765128       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5320" (3µs)
I0904 20:51:08.765277       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5320, name default, uid aa6b7a87-f00b-4e5b-ac43-c03968b0a4b5, event type delete
I0904 20:51:08.781649       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5320" (1.2µs)
... skipping 3 lines ...
I0904 20:51:09.642942       1 namespace_controller.go:185] Namespace has been deleted azuredisk-4415
I0904 20:51:09.643144       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4415" (238.703µs)
I0904 20:51:09.693417       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9103
I0904 20:51:09.769811       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9103, name kube-root-ca.crt, uid cde7ecce-f41c-4e5e-bc03-be75041cba6a, event type delete
I0904 20:51:09.771737       1 publisher.go:186] Finished syncing namespace "azuredisk-9103" (2.13913ms)
I0904 20:51:09.778677       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9103, name default-token-bcxsl, uid bfe82df9-231f-4128-b239-71ee32fda256, event type delete
E0904 20:51:09.791850       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9103/default: secrets "default-token-tn6c9" is forbidden: unable to create new content in namespace azuredisk-9103 because it is being terminated
I0904 20:51:09.826930       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9103/default), service account deleted, removing tokens
I0904 20:51:09.827060       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (3.4µs)
I0904 20:51:09.827084       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9103, name default, uid 5d4605ae-d7df-46f7-8284-f583c732c2fe, event type delete
I0904 20:51:09.856344       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9103, estimate: 0, errors: <nil>
I0904 20:51:09.856478       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (2.3µs)
I0904 20:51:09.865252       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9103" (174.160134ms)
I0904 20:51:10.547882       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="89.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46888" resp=200
2022/09/04 20:51:11 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1360.152 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-mfnwh, container manager
STEP: Dumping workload cluster default/capz-ynxxeg logs
Sep  4 20:52:56.293: INFO: Collecting logs for Linux node capz-ynxxeg-control-plane-lcqxk in cluster capz-ynxxeg in namespace default

Sep  4 20:53:56.293: INFO: Collecting boot logs for AzureMachine capz-ynxxeg-control-plane-lcqxk

Failed to get logs for machine capz-ynxxeg-control-plane-swdc6, cluster default/capz-ynxxeg: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 20:53:57.592: INFO: Collecting logs for Linux node capz-ynxxeg-mp-0000000 in cluster capz-ynxxeg in namespace default

Sep  4 20:54:57.595: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-ynxxeg-mp-0

Sep  4 20:54:58.321: INFO: Collecting logs for Linux node capz-ynxxeg-mp-0000001 in cluster capz-ynxxeg in namespace default

Sep  4 20:55:58.323: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-ynxxeg-mp-0

Failed to get logs for machine pool capz-ynxxeg-mp-0, cluster default/capz-ynxxeg: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-ynxxeg kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-tjddb, container calico-kube-controllers
STEP: Fetching kube-system pod logs took 1.260035929s
STEP: Dumping workload cluster default/capz-ynxxeg Azure activity log
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-ynxxeg-control-plane-lcqxk
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-sfkpz
... skipping 41 lines ...