Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-04 20:09
Elapsed: 48m43s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 627 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
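A hedged sketch of the pattern visible in the three lines above: the NotFound message is just a pre-create existence check before the secret is created and labeled. The actual hack/create-identity-secret.sh may differ; the secret key and label names below are placeholders, not the script's real ones.
# Check-then-create pattern for the AzureClusterIdentity secret (sketch only).
kubectl get secret cluster-identity-secret >/dev/null 2>&1 || \
  kubectl create secret generic cluster-identity-secret \
    --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"   # key name is a placeholder
kubectl label secret cluster-identity-secret example.com/purpose=capz-e2e --overwrite   # placeholder label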
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 137 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-l9y77r-kubeconfig; do sleep 1; done"
capz-l9y77r-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-l9y77r-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-l9y77r-control-plane-kvxcv   NotReady   control-plane,master   2s    v1.22.14-rc.0.3+b89409c45e0dcb
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-l9y77r-control-plane-kvxcv condition met
node/capz-l9y77r-md-0-qlvdg condition met
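Condensed, the bring-up sequence above is three steps, shown here as a sketch with the pinned kubectl-v1.22.4 path shortened to kubectl and the cluster name taken from this run:
CLUSTER=capz-l9y77r
# 1. Wait for Cluster API to publish the workload cluster's kubeconfig secret.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep ${CLUSTER}-kubeconfig; do sleep 1; done"
# 2. Extract and decode it; the kubeconfig is base64-encoded under .data.value.
kubectl get secrets "${CLUSTER}-kubeconfig" -o json | jq -r .data.value | base64 --decode > ./kubeconfig
# 3. Poll the new API server until a control-plane node registers. Early errors such as
#    'the server doesn't have a resource type "nodes"' are expected while the API server
#    is still coming up; the loop simply retries until grep matches.
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"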
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:26:45.367: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-lpnkf" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  4 20:26:45.424: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 57.806564ms
Sep  4 20:26:47.484: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117220122s
Sep  4 20:26:49.543: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176791963s
Sep  4 20:26:51.602: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.235380121s
Sep  4 20:26:53.661: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.294027961s
Sep  4 20:26:55.720: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.353132493s
Sep  4 20:26:57.779: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 12.412506542s
Sep  4 20:26:59.838: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.47091631s
Sep  4 20:27:01.901: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.53455009s
Sep  4 20:27:03.966: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.599113258s
Sep  4 20:27:06.033: INFO: Pod "azuredisk-volume-tester-lpnkf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.666823906s
STEP: Saw pod success
Sep  4 20:27:06.034: INFO: Pod "azuredisk-volume-tester-lpnkf" satisfied condition "Succeeded or Failed"
Sep  4 20:27:06.034: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-lpnkf"
Sep  4 20:27:06.105: INFO: Pod azuredisk-volume-tester-lpnkf has the following logs: hello world
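The run of Pending lines above comes from the suite's pod-wait helper, which polls pod.status.phase roughly every two seconds for up to 15 minutes. A rough shell equivalent, reusing the kubeconfig written earlier and the generated names from this run:
NS=azuredisk-8081
POD=azuredisk-volume-tester-lpnkf
# Poll every ~2s with a 15-minute budget until the pod reaches a terminal phase.
for _ in $(seq 1 450); do
  phase=$(kubectl --kubeconfig=./kubeconfig -n "$NS" get pod "$POD" -o jsonpath='{.status.phase}')
  if [ "$phase" = "Succeeded" ] || [ "$phase" = "Failed" ]; then break; fi
  sleep 2
done
echo "pod ${POD} finished with phase: ${phase}"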

STEP: Deleting pod azuredisk-volume-tester-lpnkf in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:27:06.292: INFO: deleting PVC "azuredisk-8081"/"pvc-8cnsq"
Sep  4 20:27:06.292: INFO: Deleting PersistentVolumeClaim "pvc-8cnsq"
STEP: waiting for claim's PV "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" to be deleted
Sep  4 20:27:06.351: INFO: Waiting up to 10m0s for PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 to get deleted
Sep  4 20:27:06.409: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (58.038608ms)
Sep  4 20:27:11.468: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (5.117123157s)
Sep  4 20:27:16.530: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (10.179267144s)
Sep  4 20:27:21.593: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (15.241925921s)
Sep  4 20:27:26.654: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (20.302758208s)
Sep  4 20:27:31.713: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (25.361877958s)
Sep  4 20:27:36.772: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (30.420785216s)
Sep  4 20:27:41.831: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 found and phase=Failed (35.479629068s)
Sep  4 20:27:46.892: INFO: PersistentVolume pvc-b517843f-1fc1-4ef9-913b-211e2755d952 was removed
Sep  4 20:27:46.892: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  4 20:27:46.950: INFO: Claim "azuredisk-8081" in namespace "pvc-8cnsq" doesn't exist in the system
Sep  4 20:27:46.950: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-hwxmq
Sep  4 20:27:47.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  4 20:28:01.937: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-64npl"
Sep  4 20:28:01.997: INFO: Error getting logs for pod azuredisk-volume-tester-64npl: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-64npl)
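This case asserts on a FailedMount event rather than on pod logs, which is likely why the log fetch above is rejected for a container that never started. A manual equivalent of the event check, assuming the same namespace and pod name:
kubectl --kubeconfig=./kubeconfig -n azuredisk-5466 get events \
  --field-selector involvedObject.name=azuredisk-volume-tester-64npl,reason=FailedMount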
STEP: Deleting pod azuredisk-volume-tester-64npl in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:28:02.173: INFO: deleting PVC "azuredisk-5466"/"pvc-l9s8f"
Sep  4 20:28:02.173: INFO: Deleting PersistentVolumeClaim "pvc-l9s8f"
STEP: waiting for claim's PV "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" to be deleted
... skipping 18 lines ...
Sep  4 20:29:28.331: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m26.098226054s)
Sep  4 20:29:33.394: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m31.160824149s)
Sep  4 20:29:38.457: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m36.223466687s)
Sep  4 20:29:43.516: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m41.282973817s)
Sep  4 20:29:48.576: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m46.342668888s)
Sep  4 20:29:53.638: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Bound (1m51.404775105s)
Sep  4 20:29:58.700: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (1m56.467259524s)
Sep  4 20:30:03.760: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m1.52655327s)
Sep  4 20:30:08.819: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m6.586056658s)
Sep  4 20:30:13.880: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m11.646442735s)
Sep  4 20:30:18.942: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m16.708547548s)
Sep  4 20:30:24.003: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m21.769726284s)
Sep  4 20:30:29.066: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m26.832441213s)
Sep  4 20:30:34.125: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m31.891949458s)
Sep  4 20:30:39.186: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee found and phase=Failed (2m36.95258805s)
Sep  4 20:30:44.248: INFO: PersistentVolume pvc-bfb1295c-5bba-4952-b819-cc7a39642eee was removed
Sep  4 20:30:44.248: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  4 20:30:44.306: INFO: Claim "azuredisk-5466" in namespace "pvc-l9s8f" doesn't exist in the system
Sep  4 20:30:44.307: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-pp7jg
Sep  4 20:30:44.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:30:45.451: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-mrq5f" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  4 20:30:45.512: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 60.977438ms
Sep  4 20:30:47.571: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120457002s
Sep  4 20:30:49.630: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179389001s
Sep  4 20:30:51.692: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.241076453s
Sep  4 20:30:53.751: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.300731693s
Sep  4 20:30:55.811: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.36050431s
Sep  4 20:30:57.873: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.422074351s
Sep  4 20:30:59.935: INFO: Pod "azuredisk-volume-tester-mrq5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.484390364s
STEP: Saw pod success
Sep  4 20:30:59.935: INFO: Pod "azuredisk-volume-tester-mrq5f" satisfied condition "Succeeded or Failed"
Sep  4 20:30:59.935: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-mrq5f"
Sep  4 20:31:00.002: INFO: Pod azuredisk-volume-tester-mrq5f has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-mrq5f in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:31:00.186: INFO: deleting PVC "azuredisk-2790"/"pvc-c95vm"
Sep  4 20:31:00.186: INFO: Deleting PersistentVolumeClaim "pvc-c95vm"
STEP: waiting for claim's PV "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" to be deleted
Sep  4 20:31:00.245: INFO: Waiting up to 10m0s for PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 to get deleted
Sep  4 20:31:00.303: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Released (57.781126ms)
Sep  4 20:31:05.362: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (5.11673321s)
Sep  4 20:31:10.425: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (10.179653571s)
Sep  4 20:31:15.486: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (15.240478241s)
Sep  4 20:31:20.546: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (20.301202511s)
Sep  4 20:31:25.606: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (25.360646971s)
Sep  4 20:31:30.666: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (30.421091302s)
Sep  4 20:31:35.727: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (35.482142925s)
Sep  4 20:31:40.790: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 found and phase=Failed (40.544945521s)
Sep  4 20:31:45.849: INFO: PersistentVolume pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 was removed
Sep  4 20:31:45.849: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  4 20:31:45.907: INFO: Claim "azuredisk-2790" in namespace "pvc-c95vm" doesn't exist in the system
Sep  4 20:31:45.907: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-9zlb9
Sep  4 20:31:45.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  4 20:31:47.047: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-d9zkv" in namespace "azuredisk-5356" to be "Error status code"
Sep  4 20:31:47.105: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 57.990909ms
Sep  4 20:31:49.165: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117430294s
Sep  4 20:31:51.225: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177783974s
Sep  4 20:31:53.284: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237270853s
Sep  4 20:31:55.343: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.295897452s
Sep  4 20:31:57.403: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.355636067s
Sep  4 20:31:59.462: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.41503981s
Sep  4 20:32:01.522: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.474491025s
Sep  4 20:32:03.584: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.536462318s
Sep  4 20:32:05.646: INFO: Pod "azuredisk-volume-tester-d9zkv": Phase="Failed", Reason="", readiness=false. Elapsed: 18.598425378s
STEP: Saw pod failure
Sep  4 20:32:05.646: INFO: Pod "azuredisk-volume-tester-d9zkv" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  4 20:32:05.705: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-d9zkv"
Sep  4 20:32:05.765: INFO: Pod azuredisk-volume-tester-d9zkv has the following logs: touch: /mnt/test-1/data: Read-only file system
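A hedged sketch of the negative case exercised here: mount the claim read-only and run a command that must fail with a non-zero exit code. The PVC name and mount path come from this run's log; the pod name, image, and exact spec are assumptions, not the suite's actual manifest.
cat <<'EOF' | kubectl --kubeconfig=./kubeconfig -n azuredisk-5356 apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: azuredisk-volume-tester-readonly   # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: volume-tester
    image: busybox                          # assumed image
    command: ["sh", "-c", "touch /mnt/test-1/data"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test-1
      readOnly: true
  volumes:
  - name: test-volume
    persistentVolumeClaim:
      claimName: pvc-kshwn
EOF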

STEP: Deleting pod azuredisk-volume-tester-d9zkv in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:32:05.949: INFO: deleting PVC "azuredisk-5356"/"pvc-kshwn"
Sep  4 20:32:05.949: INFO: Deleting PersistentVolumeClaim "pvc-kshwn"
STEP: waiting for claim's PV "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" to be deleted
Sep  4 20:32:06.008: INFO: Waiting up to 10m0s for PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 to get deleted
Sep  4 20:32:06.066: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (58.360225ms)
Sep  4 20:32:11.129: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (5.121258847s)
Sep  4 20:32:16.189: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (10.180944581s)
Sep  4 20:32:21.248: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (15.240016922s)
Sep  4 20:32:26.310: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (20.302454427s)
Sep  4 20:32:31.371: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (25.363656272s)
Sep  4 20:32:36.434: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (30.426065413s)
Sep  4 20:32:41.494: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 found and phase=Failed (35.485897482s)
Sep  4 20:32:46.553: INFO: PersistentVolume pvc-a3f21a61-c478-4872-afdd-a63ef0705782 was removed
Sep  4 20:32:46.554: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  4 20:32:46.612: INFO: Claim "azuredisk-5356" in namespace "pvc-kshwn" doesn't exist in the system
Sep  4 20:32:46.612: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-s6rpj
Sep  4 20:32:46.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 53 lines ...
Sep  4 20:33:37.893: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (5.117282604s)
Sep  4 20:33:42.954: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (10.1778814s)
Sep  4 20:33:48.013: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (15.237300603s)
Sep  4 20:33:53.073: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (20.296959785s)
Sep  4 20:33:58.132: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (25.355900249s)
Sep  4 20:34:03.195: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Bound (30.418602617s)
Sep  4 20:34:08.254: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (35.478128481s)
Sep  4 20:34:13.313: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (40.536959766s)
Sep  4 20:34:18.374: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (45.598159323s)
Sep  4 20:34:23.433: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (50.65733061s)
Sep  4 20:34:28.493: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (55.716459598s)
Sep  4 20:34:33.551: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (1m0.775368317s)
Sep  4 20:34:38.615: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 found and phase=Failed (1m5.838587956s)
Sep  4 20:34:43.675: INFO: PersistentVolume pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 was removed
Sep  4 20:34:43.675: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:34:43.734: INFO: Claim "azuredisk-5194" in namespace "pvc-nngt5" doesn't exist in the system
Sep  4 20:34:43.734: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-6gd29
Sep  4 20:34:43.794: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-5b4pg"
Sep  4 20:34:43.854: INFO: Pod azuredisk-volume-tester-5b4pg has the following logs: 
... skipping 8 lines ...
Sep  4 20:34:49.213: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (5.120784035s)
Sep  4 20:34:54.276: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (10.183959305s)
Sep  4 20:34:59.338: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (15.246242606s)
Sep  4 20:35:04.401: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (20.30916812s)
Sep  4 20:35:09.464: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (25.371932328s)
Sep  4 20:35:14.527: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Bound (30.435521716s)
Sep  4 20:35:19.588: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Failed (35.495753412s)
Sep  4 20:35:24.650: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Failed (40.558672991s)
Sep  4 20:35:29.713: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Failed (45.620896381s)
Sep  4 20:35:34.774: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Failed (50.68198536s)
Sep  4 20:35:39.837: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 found and phase=Failed (55.7449105s)
Sep  4 20:35:44.897: INFO: PersistentVolume pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 was removed
Sep  4 20:35:44.897: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:35:44.955: INFO: Claim "azuredisk-5194" in namespace "pvc-b6r46" doesn't exist in the system
Sep  4 20:35:44.955: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-xxc52
Sep  4 20:35:45.015: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-hb9kv"
Sep  4 20:35:45.081: INFO: Pod azuredisk-volume-tester-hb9kv has the following logs: 
... skipping 8 lines ...
Sep  4 20:35:50.443: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (5.119044015s)
Sep  4 20:35:55.503: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (10.179591677s)
Sep  4 20:36:00.562: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (15.238734666s)
Sep  4 20:36:05.625: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (20.301358264s)
Sep  4 20:36:10.689: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (25.36506527s)
Sep  4 20:36:15.754: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Bound (30.430070469s)
Sep  4 20:36:20.813: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Failed (35.489691654s)
Sep  4 20:36:25.876: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Failed (40.552784852s)
Sep  4 20:36:30.939: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Failed (45.615947022s)
Sep  4 20:36:36.003: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Failed (50.679162564s)
Sep  4 20:36:41.062: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 found and phase=Failed (55.738862287s)
Sep  4 20:36:46.125: INFO: PersistentVolume pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 was removed
Sep  4 20:36:46.125: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:36:46.184: INFO: Claim "azuredisk-5194" in namespace "pvc-ztthq" doesn't exist in the system
Sep  4 20:36:46.184: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-lnjnt
Sep  4 20:36:46.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 57 lines ...
Sep  4 20:38:09.502: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (5.121741988s)
Sep  4 20:38:14.563: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (10.182575163s)
Sep  4 20:38:19.626: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (15.246184724s)
Sep  4 20:38:24.687: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (20.306849491s)
Sep  4 20:38:29.748: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (25.367994824s)
Sep  4 20:38:34.811: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Bound (30.431192811s)
Sep  4 20:38:39.872: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (35.492129273s)
Sep  4 20:38:44.936: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (40.555748749s)
Sep  4 20:38:49.997: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (45.61689738s)
Sep  4 20:38:55.056: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (50.675958578s)
Sep  4 20:39:00.118: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (55.737877582s)
Sep  4 20:39:05.182: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (1m0.802202889s)
Sep  4 20:39:10.246: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 found and phase=Failed (1m5.865344286s)
Sep  4 20:39:15.309: INFO: PersistentVolume pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 was removed
Sep  4 20:39:15.309: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  4 20:39:15.368: INFO: Claim "azuredisk-1353" in namespace "pvc-kgxxq" doesn't exist in the system
Sep  4 20:39:15.368: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-bh7kw
Sep  4 20:39:15.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:39:35.637: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9zc6t" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  4 20:39:35.699: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 62.109543ms
Sep  4 20:39:37.758: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121455882s
Sep  4 20:39:39.822: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185109855s
Sep  4 20:39:41.886: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.249568077s
Sep  4 20:39:43.950: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31327195s
Sep  4 20:39:46.014: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.377450299s
... skipping 6 lines ...
Sep  4 20:40:00.454: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 24.81785216s
Sep  4 20:40:02.518: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 26.88116598s
Sep  4 20:40:04.581: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 28.944210088s
Sep  4 20:40:06.644: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Pending", Reason="", readiness=false. Elapsed: 31.007389268s
Sep  4 20:40:08.706: INFO: Pod "azuredisk-volume-tester-9zc6t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.069751558s
STEP: Saw pod success
Sep  4 20:40:08.706: INFO: Pod "azuredisk-volume-tester-9zc6t" satisfied condition "Succeeded or Failed"
Sep  4 20:40:08.706: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-9zc6t"
Sep  4 20:40:08.784: INFO: Pod azuredisk-volume-tester-9zc6t has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-9zc6t in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:40:08.971: INFO: deleting PVC "azuredisk-59"/"pvc-w62mw"
Sep  4 20:40:08.971: INFO: Deleting PersistentVolumeClaim "pvc-w62mw"
STEP: waiting for claim's PV "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" to be deleted
Sep  4 20:40:09.030: INFO: Waiting up to 10m0s for PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 to get deleted
Sep  4 20:40:09.089: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Released (58.09525ms)
Sep  4 20:40:14.151: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (5.121051532s)
Sep  4 20:40:19.215: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (10.184434104s)
Sep  4 20:40:24.274: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (15.243494858s)
Sep  4 20:40:29.334: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (20.303330067s)
Sep  4 20:40:34.395: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (25.364578499s)
Sep  4 20:40:39.456: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (30.425448084s)
Sep  4 20:40:44.516: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (35.485973019s)
Sep  4 20:40:49.581: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (40.55081927s)
Sep  4 20:40:54.642: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (45.611646187s)
Sep  4 20:40:59.703: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (50.672908627s)
Sep  4 20:41:04.766: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (55.735591642s)
Sep  4 20:41:09.826: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 found and phase=Failed (1m0.796018043s)
Sep  4 20:41:14.885: INFO: PersistentVolume pvc-6b1920e7-40e5-4080-a018-83519d0670a7 was removed
Sep  4 20:41:14.885: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 20:41:14.944: INFO: Claim "azuredisk-59" in namespace "pvc-w62mw" doesn't exist in the system
Sep  4 20:41:14.944: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-lw2hw
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:41:37.182: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rjfnt" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  4 20:41:37.242: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 59.925046ms
Sep  4 20:41:39.301: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118654501s
Sep  4 20:41:41.362: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180156425s
Sep  4 20:41:43.422: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.239919688s
Sep  4 20:41:45.482: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 8.299823325s
Sep  4 20:41:47.542: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 10.360040079s
Sep  4 20:41:49.602: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 12.420023403s
Sep  4 20:41:51.662: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 14.479633819s
Sep  4 20:41:53.725: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Pending", Reason="", readiness=false. Elapsed: 16.542487142s
Sep  4 20:41:55.788: INFO: Pod "azuredisk-volume-tester-rjfnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.605432862s
STEP: Saw pod success
Sep  4 20:41:55.788: INFO: Pod "azuredisk-volume-tester-rjfnt" satisfied condition "Succeeded or Failed"
Sep  4 20:41:55.788: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-rjfnt"
Sep  4 20:41:55.855: INFO: Pod azuredisk-volume-tester-rjfnt has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.049913 seconds, 2.0GB/s
hello world
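The 100+0 records in/out lines indicate the tester wrote 100 MiB (104857600 bytes) to the mounted volume with dd before printing hello world. Roughly, as a sketch of the in-container command rather than the suite's exact one:
# Assumed workload: fill a 100 MiB file on the mounted disk, then print a marker.
dd if=/dev/zero of=/mnt/test-1/data bs=1024k count=100
echo 'hello world'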

STEP: Deleting pod azuredisk-volume-tester-rjfnt in namespace azuredisk-2546
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:41:56.041: INFO: deleting PVC "azuredisk-2546"/"pvc-cfjfq"
Sep  4 20:41:56.041: INFO: Deleting PersistentVolumeClaim "pvc-cfjfq"
STEP: waiting for claim's PV "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" to be deleted
Sep  4 20:41:56.101: INFO: Waiting up to 10m0s for PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 to get deleted
Sep  4 20:41:56.161: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (60.7509ms)
Sep  4 20:42:01.222: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (5.121088219s)
Sep  4 20:42:06.287: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (10.186177579s)
Sep  4 20:42:11.350: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (15.249142191s)
Sep  4 20:42:16.413: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (20.311940878s)
Sep  4 20:42:21.475: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (25.373938387s)
Sep  4 20:42:26.534: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 found and phase=Failed (30.433636648s)
Sep  4 20:42:31.595: INFO: PersistentVolume pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 was removed
Sep  4 20:42:31.595: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  4 20:42:31.653: INFO: Claim "azuredisk-2546" in namespace "pvc-cfjfq" doesn't exist in the system
Sep  4 20:42:31.653: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-7l7tn
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:42:45.509: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-26xf4" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  4 20:42:45.570: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 60.668102ms
Sep  4 20:42:47.629: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11945356s
Sep  4 20:42:49.693: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.183254898s
Sep  4 20:42:51.755: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245206627s
Sep  4 20:42:53.817: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.307856698s
Sep  4 20:42:55.881: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.371179304s
... skipping 5 lines ...
Sep  4 20:43:08.261: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.751720267s
Sep  4 20:43:10.324: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.814577865s
Sep  4 20:43:12.386: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.876317703s
Sep  4 20:43:14.448: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Pending", Reason="", readiness=false. Elapsed: 28.938240356s
Sep  4 20:43:16.511: INFO: Pod "azuredisk-volume-tester-26xf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.001778003s
STEP: Saw pod success
Sep  4 20:43:16.511: INFO: Pod "azuredisk-volume-tester-26xf4" satisfied condition "Succeeded or Failed"
Sep  4 20:43:16.511: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-26xf4"
Sep  4 20:43:16.581: INFO: Pod azuredisk-volume-tester-26xf4 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-26xf4 in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:43:16.765: INFO: deleting PVC "azuredisk-8582"/"pvc-fwxnk"
Sep  4 20:43:16.765: INFO: Deleting PersistentVolumeClaim "pvc-fwxnk"
STEP: waiting for claim's PV "pvc-633fd388-d680-4db1-b72e-87378148b0aa" to be deleted
Sep  4 20:43:16.828: INFO: Waiting up to 10m0s for PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa to get deleted
Sep  4 20:43:16.888: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (60.278999ms)
Sep  4 20:43:21.950: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (5.122804866s)
Sep  4 20:43:27.009: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (10.181690271s)
Sep  4 20:43:32.070: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (15.242155227s)
Sep  4 20:43:37.132: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (20.304753295s)
Sep  4 20:43:42.194: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (25.366394663s)
Sep  4 20:43:47.253: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (30.425390369s)
Sep  4 20:43:52.313: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (35.485800112s)
Sep  4 20:43:57.375: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa found and phase=Failed (40.547746203s)
Sep  4 20:44:02.438: INFO: PersistentVolume pvc-633fd388-d680-4db1-b72e-87378148b0aa was removed
Sep  4 20:44:02.438: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  4 20:44:02.496: INFO: Claim "azuredisk-8582" in namespace "pvc-fwxnk" doesn't exist in the system
Sep  4 20:44:02.496: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-6r4gh
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:44:02.673: INFO: deleting PVC "azuredisk-8582"/"pvc-8v7d2"
Sep  4 20:44:02.673: INFO: Deleting PersistentVolumeClaim "pvc-8v7d2"
STEP: waiting for claim's PV "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" to be deleted
Sep  4 20:44:02.732: INFO: Waiting up to 10m0s for PersistentVolume pvc-017e56d7-c127-434c-83b8-005cf612b8fe to get deleted
Sep  4 20:44:02.793: INFO: PersistentVolume pvc-017e56d7-c127-434c-83b8-005cf612b8fe found and phase=Failed (60.683478ms)
Sep  4 20:44:07.852: INFO: PersistentVolume pvc-017e56d7-c127-434c-83b8-005cf612b8fe found and phase=Failed (5.119593116s)
Sep  4 20:44:12.910: INFO: PersistentVolume pvc-017e56d7-c127-434c-83b8-005cf612b8fe was removed
Sep  4 20:44:12.910: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  4 20:44:12.968: INFO: Claim "azuredisk-8582" in namespace "pvc-8v7d2" doesn't exist in the system
Sep  4 20:44:12.968: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-8mx7t
STEP: validating provisioned PV
STEP: checking the PV
... skipping 390 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  4 20:47:19.392: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.548 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 247 lines ...
I0904 20:22:12.938689       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-04 20:15:08 +0000 UTC to 2032-09-01 20:20:08 +0000 UTC (now=2022-09-04 20:22:12.938653047 +0000 UTC))"
I0904 20:22:12.939023       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662322931\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662322931\" (2022-09-04 19:22:11 +0000 UTC to 2023-09-04 19:22:11 +0000 UTC (now=2022-09-04 20:22:12.938991649 +0000 UTC))"
I0904 20:22:12.939328       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662322932\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662322932\" (2022-09-04 19:22:11 +0000 UTC to 2023-09-04 19:22:11 +0000 UTC (now=2022-09-04 20:22:12.939301752 +0000 UTC))"
I0904 20:22:12.939414       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0904 20:22:12.939490       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0904 20:22:12.939976       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0904 20:22:14.447278       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 20:22:14.447369       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0904 20:22:17.643339       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0904 20:22:17.644398       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-l9y77r-control-plane-kvxcv_2f9adec5-4726-4c81-bcaf-56a36d8f4bac became leader"
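The leader-election lines above show client-go leader election during control-plane bootstrap: the first lease fetch is rejected, apparently because bootstrap RBAC had not yet been reconciled, the client retries, and a few seconds later it acquires the kube-system/kube-controller-manager lease. The current holder can be checked directly:
# Show which controller-manager instance currently holds the leader lease.
kubectl --kubeconfig=./kubeconfig -n kube-system get lease kube-controller-manager -o jsonpath='{.spec.holderIdentity}{"\n"}'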
W0904 20:22:17.665310       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0904 20:22:17.665872       1 azure_auth.go:232] Using AzurePublicCloud environment
I0904 20:22:17.665984       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0904 20:22:17.666056       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0904 20:22:17.667427       1 reflector.go:219] Starting reflector *v1.ServiceAccount (21h47m9.806581553s) from k8s.io/client-go/informers/factory.go:134
I0904 20:22:17.667574       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0904 20:22:17.667982       1 reflector.go:219] Starting reflector *v1.Node (21h47m9.806581553s) from k8s.io/client-go/informers/factory.go:134
I0904 20:22:17.668144       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0904 20:22:17.671108       1 reflector.go:219] Starting reflector *v1.Secret (21h47m9.806581553s) from k8s.io/client-go/informers/factory.go:134
I0904 20:22:17.671163       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0904 20:22:17.685299       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 20:22:17.685365       1 controllermanager.go:562] Starting "disruption"
I0904 20:22:17.708788       1 controllermanager.go:577] Started "disruption"
I0904 20:22:17.708811       1 controllermanager.go:562] Starting "persistentvolume-binder"
I0904 20:22:17.708953       1 disruption.go:363] Starting disruption controller
I0904 20:22:17.709031       1 shared_informer.go:240] Waiting for caches to sync for disruption
I0904 20:22:17.714124       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
... skipping 8 lines ...
I0904 20:22:17.714365       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 20:22:17.714382       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0904 20:22:17.714393       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0904 20:22:17.714417       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 20:22:17.714434       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0904 20:22:17.714466       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0904 20:22:17.714513       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:22:17.714525       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 20:22:17.714586       1 controllermanager.go:577] Started "persistentvolume-binder"
I0904 20:22:17.714600       1 controllermanager.go:562] Starting "ephemeral-volume"
I0904 20:22:17.714737       1 pv_controller_base.go:308] Starting persistent volume controller
I0904 20:22:17.714749       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0904 20:22:17.720786       1 controllermanager.go:577] Started "ephemeral-volume"
... skipping 146 lines ...
I0904 20:22:20.147946       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 20:22:20.147972       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 20:22:20.148015       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0904 20:22:20.148029       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0904 20:22:20.148049       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0904 20:22:20.148090       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 20:22:20.148117       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:22:20.148131       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 20:22:20.148369       1 controllermanager.go:577] Started "attachdetach"
I0904 20:22:20.148386       1 controllermanager.go:562] Starting "csrcleaner"
I0904 20:22:20.148435       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 20:22:20.148530       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0904 20:22:20.148612       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-control-plane-kvxcv"
W0904 20:22:20.148656       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l9y77r-control-plane-kvxcv" does not exist
I0904 20:22:20.196573       1 controllermanager.go:577] Started "csrcleaner"
I0904 20:22:20.196596       1 controllermanager.go:562] Starting "pv-protection"
I0904 20:22:20.196631       1 cleaner.go:82] Starting CSR cleaner controller
I0904 20:22:20.347332       1 controllermanager.go:577] Started "pv-protection"
I0904 20:22:20.347513       1 controllermanager.go:562] Starting "ttl-after-finished"
I0904 20:22:20.347449       1 pv_protection_controller.go:83] Starting PV protection controller
... skipping 431 lines ...
I0904 20:22:22.804711       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0bd601faff6d7e8, ext:11892200047, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:22.805021       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-78fcd69978" need=2 creating=2
I0904 20:22:22.805704       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0904 20:22:22.814445       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 20:22:22.815237       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 20:22:22.80549924 +0000 UTC m=+11.892992907 - now: 2022-09-04 20:22:22.815229241 +0000 UTC m=+11.902722908]
I0904 20:22:22.818416       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="586.470823ms"
I0904 20:22:22.818446       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:22:22.818619       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:22:22.818482774 +0000 UTC m=+11.905976441"
I0904 20:22:22.819209       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 20:22:22 +0000 UTC - now: 2022-09-04 20:22:22.819204237 +0000 UTC m=+11.906697904]
I0904 20:22:22.822781       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 20:22:22.823104       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="4.607264ms"
I0904 20:22:22.823145       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:22:22.823126736 +0000 UTC m=+11.910620403"
I0904 20:22:22.823328       1 shared_informer.go:270] caches populated
... skipping 112 lines ...
I0904 20:22:23.902281       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-8c95fb79b"
I0904 20:22:23.902472       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (18.031151ms)
I0904 20:22:23.902495       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bd601ff4b806c9, ext:12971969360, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:23.902555       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-8c95fb79b, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 20:22:23.906678       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (15.518013ms)
I0904 20:22:23.909552       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="29.050277ms"
I0904 20:22:23.909583       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:22:23.909619       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 20:22:23.909598268 +0000 UTC m=+12.997092035"
I0904 20:22:23.910233       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-04 20:22:23 +0000 UTC - now: 2022-09-04 20:22:23.910227752 +0000 UTC m=+12.997721519]
I0904 20:22:23.911910       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-8c95fb79b"
I0904 20:22:23.913442       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (10.947627ms)
I0904 20:22:23.913469       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bd601ff4b806c9, ext:12971969360, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:23.913524       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (59.699µs)
... skipping 61 lines ...
I0904 20:22:25.823274       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-9kdn2"
I0904 20:22:25.823308       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-9kdn2, PodDisruptionBudget controller will avoid syncing.
I0904 20:22:25.823315       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-969cf87c4-9kdn2"
I0904 20:22:25.823532       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-9kdn2" podUID=8922f380-9043-49d8-b39b-e43638233363
I0904 20:22:25.823721       1 replica_set.go:443] Pod calico-kube-controllers-969cf87c4-9kdn2 updated, objectMeta {Name:calico-kube-controllers-969cf87c4-9kdn2 GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:8922f380-9043-49d8-b39b-e43638233363 ResourceVersion:534 Generation:0 CreationTimestamp:2022-09-04 20:22:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:b42877e9-c9dc-457d-8fec-eba77f0a5dfb Controller:0xc0006bcfd7 BlockOwnerDeletion:0xc0006bcfd8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:22:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b42877e9-c9dc-457d-8fec-eba77f0a5dfb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-969cf87c4-9kdn2 GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:8922f380-9043-49d8-b39b-e43638233363 ResourceVersion:538 Generation:0 CreationTimestamp:2022-09-04 20:22:25 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:b42877e9-c9dc-457d-8fec-eba77f0a5dfb Controller:0xc000a8ffd7 BlockOwnerDeletion:0xc000a8ffd8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:22:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b42877e9-c9dc-457d-8fec-eba77f0a5dfb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 20:22:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0904 20:22:25.824230       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="50.576518ms"
I0904 20:22:25.824357       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:22:25.824527       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:22:25.824491135 +0000 UTC m=+14.911984802"
I0904 20:22:25.825073       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:22:25 +0000 UTC - now: 2022-09-04 20:22:25.825066323 +0000 UTC m=+14.912559990]
I0904 20:22:25.825424       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (43.91446ms)
I0904 20:22:25.825546       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd60206e9820fa, ext:14869215617, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:25.825749       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 20:22:25.842838       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (17.29443ms)
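[editor's note] The "Error syncing deployment ... the object has been modified" entry above is an ordinary optimistic-concurrency conflict: a stale copy of the Deployment was written back, the API server rejected it, and the controller simply re-queued and retried (the very next sync finished cleanly). As a rough illustration only, not the controller-manager's own code, a client-side writer would typically wrap the read-modify-write in client-go's conflict-retry helper, assuming an already-constructed clientset:

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateReplicas performs a read-modify-write on a Deployment, retrying
// whenever the write hits the same "object has been modified" conflict
// that appears in the log above.
func updateReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict error makes RetryOnConflict re-run this function with a fresh read
	})
}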
... skipping 288 lines ...
I0904 20:22:52.626792       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6027255b1e80, ext:41714222343, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:52.626826       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6027255c9197, ext:41714317242, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:52.626836       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0904 20:22:52.626859       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0904 20:22:52.626870       1 daemon_controller.go:1102] Updating daemon set status
I0904 20:22:52.626888       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (1.315085ms)
W0904 20:22:52.859221       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 20:22:52.859472       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0904 20:22:52.859513       1 garbagecollector.go:219] reset restmapper
E0904 20:22:52.866675       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0904 20:22:52.874508       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 20:22:52.875109       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=caliconodestatuses", kind "crd.projectcalico.org/v1, Kind=CalicoNodeStatus"
I0904 20:22:52.875162       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ipamblocks", kind "crd.projectcalico.org/v1, Kind=IPAMBlock"
... skipping 208 lines ...
I0904 20:22:56.762750       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0bd601faff6d7e8, ext:11892200047, loc:(*time.Location)(0x751a1a0)}}
I0904 20:22:56.762975       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (230.41µs)
I0904 20:22:56.764874       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="11.284882ms"
I0904 20:22:56.765303       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 20:22:56.765471       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:22:56.765450744 +0000 UTC m=+45.852944411"
I0904 20:22:56.766528       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.064145ms"
I0904 20:22:57.208629       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l9y77r-control-plane-kvxcv transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:22:34 +0000 UTC,LastTransitionTime:2022-09-04 20:22:01 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:22:54 +0000 UTC,LastTransitionTime:2022-09-04 20:22:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:22:57.208727       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-control-plane-kvxcv ReadyCondition updated. Updating timestamp.
I0904 20:22:57.208753       1 node_lifecycle_controller.go:893] Node capz-l9y77r-control-plane-kvxcv is healthy again, removing all taints
I0904 20:22:57.208771       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
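[editor's note] The node_lifecycle_controller lines above record the control-plane node's Ready condition flipping from False (CNI not yet initialized) to True, after which the controller removes the not-ready taints and leaves master disruption mode. A small illustrative sketch of reading that condition from a Node object (not the controller's actual implementation):

package example

import corev1 "k8s.io/api/core/v1"

// nodeIsReady mirrors the check behind the ReadyCondition transition logged
// above: the node counts as healthy once its Ready condition reports True.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}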
I0904 20:22:57.681636       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (356.912µs)
I0904 20:23:01.218768       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="50.802µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36328" resp=200
I0904 20:23:02.144985       1 disruption.go:427] updatePod called on pod "calico-node-h8ckx"
... skipping 68 lines ...
I0904 20:23:22.251466       1 gc_controller.go:161] GC'ing orphaned
I0904 20:23:22.251488       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:23:22.353939       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
E0904 20:23:22.493362       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 20:23:22.493424       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:23:23.116526       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-48mthv" (9.5µs)
W0904 20:23:23.195302       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 20:23:24.586714       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-control-plane-kvxcv"
I0904 20:23:27.215563       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-control-plane-kvxcv ReadyCondition updated. Updating timestamp.
I0904 20:23:31.220489       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="89.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56728" resp=200
I0904 20:23:35.273447       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-xc678"
I0904 20:23:35.273499       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-xc678, PodDisruptionBudget controller will avoid syncing.
I0904 20:23:35.273505       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-xc678"
... skipping 63 lines ...
I0904 20:24:02.250138       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:24:02.250154       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:24:02.250160       1 controller.go:731] It took 4e-05 seconds to finish nodeSyncInternal
I0904 20:24:02.252959       1 gc_controller.go:161] GC'ing orphaned
I0904 20:24:02.252975       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:24:07.043136       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
W0904 20:24:07.043183       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l9y77r-md-0-x4pd8" does not exist
I0904 20:24:07.043293       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l9y77r-md-0-x4pd8}
I0904 20:24:07.043312       1 taint_manager.go:440] "Updating known taints on node" node="capz-l9y77r-md-0-x4pd8" taints=[]
I0904 20:24:07.043384       1 controller.go:693] Ignoring node capz-l9y77r-md-0-x4pd8 with Ready condition status False
I0904 20:24:07.043403       1 controller.go:272] Triggering nodeSync
I0904 20:24:07.043411       1 controller.go:291] nodeSync has been triggered
I0904 20:24:07.043504       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 282 lines ...
I0904 20:24:27.255442       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.431818ms)
I0904 20:24:27.791007       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), assuming it is managed by availability set: not a vmss instance
I0904 20:24:27.791101       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), assuming it is managed by availability set: not a vmss instance
I0904 20:24:27.791131       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-l9y77r-md-0-x4pd8"
I0904 20:24:27.791143       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-l9y77r-md-0-x4pd8"
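[editor's note] The azure_vmss.go lines above show the cloud provider concluding that the providerID does not name a scale-set instance and falling back to the availability-set code path before querying power and provisioning state. A minimal sketch of that kind of providerID inspection follows; isVMSSProviderID is a hypothetical helper, not the cloud provider's actual code:

package example

import "strings"

// isVMSSProviderID reports whether an Azure providerID points at a VMSS
// instance. Scale-set instances carry a ".../virtualMachineScaleSets/<set>/..."
// segment, while standalone VMs (as in the log above) end in
// ".../virtualMachines/<name>" and are treated as availability-set managed.
func isVMSSProviderID(providerID string) bool {
	return strings.Contains(providerID, "/virtualMachineScaleSets/")
}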
I0904 20:24:29.497062       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
W0904 20:24:29.497094       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l9y77r-md-0-qlvdg" does not exist
I0904 20:24:29.497167       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l9y77r-md-0-qlvdg}
I0904 20:24:29.497189       1 taint_manager.go:440] "Updating known taints on node" node="capz-l9y77r-md-0-qlvdg" taints=[]
I0904 20:24:29.497263       1 controller.go:693] Ignoring node capz-l9y77r-md-0-x4pd8 with Ready condition status False
I0904 20:24:29.497332       1 controller.go:693] Ignoring node capz-l9y77r-md-0-qlvdg with Ready condition status False
I0904 20:24:29.497378       1 controller.go:272] Triggering nodeSync
I0904 20:24:29.497432       1 controller.go:291] nodeSync has been triggered
... skipping 175 lines ...
I0904 20:24:37.104648       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:24:37.104654       1 controller.go:760] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 20:24:37.104696       1 controller.go:731] It took 6.6801e-05 seconds to finish nodeSyncInternal
I0904 20:24:37.117527       1 controller_utils.go:221] Made sure that Node capz-l9y77r-md-0-x4pd8 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0904 20:24:37.119102       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
I0904 20:24:37.222503       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:24:37.225711       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l9y77r-md-0-x4pd8 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:24:17 +0000 UTC,LastTransitionTime:2022-09-04 20:24:07 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:24:37 +0000 UTC,LastTransitionTime:2022-09-04 20:24:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:24:37.225781       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-x4pd8 ReadyCondition updated. Updating timestamp.
I0904 20:24:37.252852       1 node_lifecycle_controller.go:893] Node capz-l9y77r-md-0-x4pd8 is healthy again, removing all taints
I0904 20:24:37.252927       1 node_lifecycle_controller.go:1214] Controller detected that zone westus3::0 is now in state Normal.
I0904 20:24:37.253498       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l9y77r-md-0-x4pd8}
I0904 20:24:37.253635       1 taint_manager.go:440] "Updating known taints on node" node="capz-l9y77r-md-0-x4pd8" taints=[]
I0904 20:24:37.253784       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-l9y77r-md-0-x4pd8"
... skipping 149 lines ...
I0904 20:24:59.805501       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0904 20:24:59.805643       1 daemon_controller.go:1102] Updating daemon set status
I0904 20:24:59.805835       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.68952ms)
I0904 20:25:01.218359       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60948" resp=200
I0904 20:25:02.254225       1 gc_controller.go:161] GC'ing orphaned
I0904 20:25:02.254249       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:25:02.259466       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l9y77r-md-0-qlvdg transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:24:39 +0000 UTC,LastTransitionTime:2022-09-04 20:24:29 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:24:59 +0000 UTC,LastTransitionTime:2022-09-04 20:24:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:25:02.259511       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-qlvdg ReadyCondition updated. Updating timestamp.
I0904 20:25:02.271642       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l9y77r-md-0-qlvdg}
I0904 20:25:02.271671       1 taint_manager.go:440] "Updating known taints on node" node="capz-l9y77r-md-0-qlvdg" taints=[]
I0904 20:25:02.271688       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-l9y77r-md-0-qlvdg"
I0904 20:25:02.272537       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:25:02.272755       1 node_lifecycle_controller.go:893] Node capz-l9y77r-md-0-qlvdg is healthy again, removing all taints
... skipping 306 lines ...
I0904 20:27:06.344077       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:06.344099       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:06.344112       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:06.344119       1 pv_controller.go:1763] operation "delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]" is already running, skipping
I0904 20:27:06.350593       1 pv_controller.go:1340] isVolumeReleased[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is released
I0904 20:27:06.350713       1 pv_controller.go:1404] doDeleteVolume [pvc-b517843f-1fc1-4ef9-913b-211e2755d952]
I0904 20:27:06.372764       1 pv_controller.go:1259] deletion of volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:27:06.372881       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: set phase Failed
I0904 20:27:06.372938       1 pv_controller.go:858] updating PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: set phase Failed
I0904 20:27:06.376625       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1253
I0904 20:27:06.376649       1 pv_controller.go:879] volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" entered phase "Failed"
I0904 20:27:06.376659       1 pv_controller.go:901] volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
E0904 20:27:06.376781       1 goroutinemap.go:150] Operation for "delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]" failed. No retries permitted until 2022-09-04 20:27:06.876761055 +0000 UTC m=+295.964254822 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:27:06.376906       1 event.go:291] "Event occurred" object="pvc-b517843f-1fc1-4ef9-913b-211e2755d952" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted"
I0904 20:27:06.377213       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1253
I0904 20:27:06.377400       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase: Failed, bound to: "azuredisk-8081/pvc-8cnsq (uid: b517843f-1fc1-4ef9-913b-211e2755d952)", boundByController: true
I0904 20:27:06.377548       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is bound to claim azuredisk-8081/pvc-8cnsq
I0904 20:27:06.377681       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:06.377922       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:06.378057       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:06.378181       1 pv_controller.go:1765] operation "delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]" postponed due to exponential backoff
I0904 20:27:06.377343       1 pv_protection_controller.go:205] Got event on PV pvc-b517843f-1fc1-4ef9-913b-211e2755d952
... skipping 3 lines ...
I0904 20:27:07.219169       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
I0904 20:27:07.219306       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952 to the node "capz-l9y77r-md-0-x4pd8" mounted false
I0904 20:27:07.219292       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952") on node "capz-l9y77r-md-0-x4pd8" 
I0904 20:27:07.221482       1 operation_generator.go:1599] Verified volume is safe to detach for volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952") on node "capz-l9y77r-md-0-x4pd8" 
I0904 20:27:07.231586       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:27:07.231747       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1253
I0904 20:27:07.231799       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase: Failed, bound to: "azuredisk-8081/pvc-8cnsq (uid: b517843f-1fc1-4ef9-913b-211e2755d952)", boundByController: true
I0904 20:27:07.231825       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is bound to claim azuredisk-8081/pvc-8cnsq
I0904 20:27:07.231844       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:07.231853       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:07.231868       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:07.231950       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b517843f-1fc1-4ef9-913b-211e2755d952] started
I0904 20:27:07.233471       1 pv_controller.go:1340] isVolumeReleased[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is released
I0904 20:27:07.233487       1 pv_controller.go:1404] doDeleteVolume [pvc-b517843f-1fc1-4ef9-913b-211e2755d952]
I0904 20:27:07.263657       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952 from node "capz-l9y77r-md-0-x4pd8"
I0904 20:27:07.263690       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952"
I0904 20:27:07.263704       1 azure_controller_standard.go:166] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-x4pd8) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952)
I0904 20:27:07.289415       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-x4pd8 ReadyCondition updated. Updating timestamp.
I0904 20:27:07.318643       1 pv_controller.go:1259] deletion of volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:27:07.318701       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: set phase Failed
I0904 20:27:07.318731       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase Failed already set
E0904 20:27:07.318774       1 goroutinemap.go:150] Operation for "delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]" failed. No retries permitted until 2022-09-04 20:27:08.318755553 +0000 UTC m=+297.406249220 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:27:07.362940       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:27:11.219128       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42822" resp=200
I0904 20:27:21.218166       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45760" resp=200
I0904 20:27:22.170495       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:27:22.174628       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:27:22.175692       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:27:22.231973       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:27:22.232093       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1253
I0904 20:27:22.232131       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase: Failed, bound to: "azuredisk-8081/pvc-8cnsq (uid: b517843f-1fc1-4ef9-913b-211e2755d952)", boundByController: true
I0904 20:27:22.232235       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is bound to claim azuredisk-8081/pvc-8cnsq
I0904 20:27:22.232287       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:22.232307       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:22.232322       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:22.232370       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b517843f-1fc1-4ef9-913b-211e2755d952] started
I0904 20:27:22.244172       1 pv_controller.go:1340] isVolumeReleased[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is released
I0904 20:27:22.244197       1 pv_controller.go:1404] doDeleteVolume [pvc-b517843f-1fc1-4ef9-913b-211e2755d952]
I0904 20:27:22.244230       1 pv_controller.go:1259] deletion of volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) since it's in attaching or detaching state
I0904 20:27:22.244294       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: set phase Failed
I0904 20:27:22.244402       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase Failed already set
E0904 20:27:22.244512       1 goroutinemap.go:150] Operation for "delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]" failed. No retries permitted until 2022-09-04 20:27:24.24449338 +0000 UTC m=+313.331987047 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952) since it's in attaching or detaching state
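[editor's note] The goroutinemap errors above show the PV delete operation being postponed with a doubling delay (500ms, then 1s, then 2s) while the Azure disk is still attached or detaching; deletion only succeeds once the detach completes. A minimal sketch of that exponential-backoff pattern with apimachinery's wait package, assuming a hypothetical deleteDisk callback:

package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteWithBackoff retries a delete operation with doubling delays
// (0.5s, 1s, 2s, ...), the same durationBeforeRetry progression visible
// in the controller log above.
func deleteWithBackoff(deleteDisk func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // delay before the first retry
		Factor:   2.0,                    // double the delay after each failure
		Steps:    5,                      // stop after five attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteDisk(); err != nil {
			return false, nil // not deleted yet; wait for the next backoff step
		}
		return true, nil
	})
}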
I0904 20:27:22.250405       1 controller.go:272] Triggering nodeSync
I0904 20:27:22.250428       1 controller.go:291] nodeSync has been triggered
I0904 20:27:22.250436       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:27:22.250445       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:27:22.250557       1 controller.go:731] It took 0.000121101 seconds to finish nodeSyncInternal
I0904 20:27:22.258653       1 gc_controller.go:161] GC'ing orphaned
... skipping 10 lines ...
I0904 20:27:35.186055       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 21 items received
I0904 20:27:35.602369       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0904 20:27:36.199688       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 3 items received
I0904 20:27:36.552061       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 0 items received
I0904 20:27:37.232841       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:27:37.232985       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1253
I0904 20:27:37.233029       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase: Failed, bound to: "azuredisk-8081/pvc-8cnsq (uid: b517843f-1fc1-4ef9-913b-211e2755d952)", boundByController: true
I0904 20:27:37.233105       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is bound to claim azuredisk-8081/pvc-8cnsq
I0904 20:27:37.233125       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:37.233134       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:37.233150       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:37.233186       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b517843f-1fc1-4ef9-913b-211e2755d952] started
I0904 20:27:37.236568       1 pv_controller.go:1340] isVolumeReleased[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is released
... skipping 3 lines ...
I0904 20:27:42.259008       1 gc_controller.go:161] GC'ing orphaned
I0904 20:27:42.259033       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:27:42.457948       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b517843f-1fc1-4ef9-913b-211e2755d952
I0904 20:27:42.457984       1 pv_controller.go:1435] volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" deleted
I0904 20:27:42.457995       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: success
I0904 20:27:42.470597       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b517843f-1fc1-4ef9-913b-211e2755d952" with version 1307
I0904 20:27:42.470823       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: phase: Failed, bound to: "azuredisk-8081/pvc-8cnsq (uid: b517843f-1fc1-4ef9-913b-211e2755d952)", boundByController: true
I0904 20:27:42.470963       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: volume is bound to claim azuredisk-8081/pvc-8cnsq
I0904 20:27:42.471045       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: claim azuredisk-8081/pvc-8cnsq not found
I0904 20:27:42.471091       1 pv_controller.go:1108] reclaimVolume[pvc-b517843f-1fc1-4ef9-913b-211e2755d952]: policy is Delete
I0904 20:27:42.471130       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b517843f-1fc1-4ef9-913b-211e2755d952[ff324a61-5f3d-4734-b9b0-329e8d9a1533]]
I0904 20:27:42.471188       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b517843f-1fc1-4ef9-913b-211e2755d952] started
I0904 20:27:42.470771       1 pv_protection_controller.go:205] Got event on PV pvc-b517843f-1fc1-4ef9-913b-211e2755d952
... skipping 44 lines ...
I0904 20:27:49.800352       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5466/pvc-l9s8f" with version 1343
I0904 20:27:49.800557       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-5466/pvc-l9s8f"
I0904 20:27:49.801985       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee StorageAccountType:StandardSSD_LRS Size:10
I0904 20:27:51.218498       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="69.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37042" resp=200
I0904 20:27:52.135118       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0904 20:27:52.159821       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-28sq5, uid fd9a11c8-e115-4bfd-8dda-02c0a5d4659c, event type delete
E0904 20:27:52.170370       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-bpnpl" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0904 20:27:52.175771       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:27:52.182118       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-lpnkf.1711c226cb4a610e, uid 17fb446f-ab45-4c63-98b1-6ff382616c64, event type delete
I0904 20:27:52.185205       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-lpnkf.1711c228247b011a, uid 9f079721-8ae7-451d-9ad0-218ac8128cef, event type delete
I0904 20:27:52.188396       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-lpnkf.1711c2294fbd39d0, uid 758ca02a-8467-4e25-ad76-701dd0df57e0, event type delete
I0904 20:27:52.192551       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee StorageAccountType:StandardSSD_LRS Size:10
I0904 20:27:52.193879       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-lpnkf.1711c22992151039, uid d4f77bb2-3be7-4e19-b69a-16997e9194e9, event type delete
... skipping 476 lines ...
I0904 20:29:57.787663       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:29:57.787728       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee] started
I0904 20:29:57.787733       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:29:57.787876       1 pv_controller.go:1763] operation "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" is already running, skipping
I0904 20:29:57.789340       1 pv_controller.go:1340] isVolumeReleased[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is released
I0904 20:29:57.789355       1 pv_controller.go:1404] doDeleteVolume [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]
I0904 20:29:58.090570       1 pv_controller.go:1259] deletion of volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:29:58.090593       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: set phase Failed
I0904 20:29:58.090601       1 pv_controller.go:858] updating PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: set phase Failed
I0904 20:29:58.094873       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1577
I0904 20:29:58.094907       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase: Failed, bound to: "azuredisk-5466/pvc-l9s8f (uid: bfb1295c-5bba-4952-b819-cc7a39642eee)", boundByController: true
I0904 20:29:58.094931       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is bound to claim azuredisk-5466/pvc-l9s8f
I0904 20:29:58.094947       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: claim azuredisk-5466/pvc-l9s8f not found
I0904 20:29:58.094954       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:29:58.094967       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:29:58.094973       1 pv_controller.go:1763] operation "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" is already running, skipping
I0904 20:29:58.094989       1 pv_protection_controller.go:205] Got event on PV pvc-bfb1295c-5bba-4952-b819-cc7a39642eee
I0904 20:29:58.095022       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1577
I0904 20:29:58.095032       1 pv_controller.go:879] volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" entered phase "Failed"
I0904 20:29:58.095038       1 pv_controller.go:901] volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
E0904 20:29:58.095094       1 goroutinemap.go:150] Operation for "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" failed. No retries permitted until 2022-09-04 20:29:58.595059783 +0000 UTC m=+467.682553450 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:29:58.095175       1 event.go:291] "Event occurred" object="pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted"
I0904 20:30:01.218308       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="57.3µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46126" resp=200
I0904 20:30:01.671071       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 54 items received
I0904 20:30:02.262581       1 gc_controller.go:161] GC'ing orphaned
I0904 20:30:02.262607       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:30:02.894210       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:30:06.122085       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:30:07.239888       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:30:07.240095       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1577
I0904 20:30:07.240198       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase: Failed, bound to: "azuredisk-5466/pvc-l9s8f (uid: bfb1295c-5bba-4952-b819-cc7a39642eee)", boundByController: true
I0904 20:30:07.240290       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is bound to claim azuredisk-5466/pvc-l9s8f
I0904 20:30:07.240360       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: claim azuredisk-5466/pvc-l9s8f not found
I0904 20:30:07.240421       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:30:07.240503       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:30:07.240611       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee] started
I0904 20:30:07.242729       1 pv_controller.go:1340] isVolumeReleased[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is released
I0904 20:30:07.242876       1 pv_controller.go:1404] doDeleteVolume [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]
I0904 20:30:07.264536       1 pv_controller.go:1259] deletion of volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:30:07.264556       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: set phase Failed
I0904 20:30:07.264565       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase Failed already set
E0904 20:30:07.264591       1 goroutinemap.go:150] Operation for "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" failed. No retries permitted until 2022-09-04 20:30:08.264572966 +0000 UTC m=+477.352066733 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:30:07.280318       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
I0904 20:30:07.280401       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee to the node "capz-l9y77r-md-0-x4pd8" mounted false
I0904 20:30:07.311598       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-x4pd8 ReadyCondition updated. Updating timestamp.
I0904 20:30:07.347108       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l9y77r-md-0-x4pd8" succeeded. VolumesAttached: []
I0904 20:30:07.348483       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee") on node "capz-l9y77r-md-0-x4pd8" 
I0904 20:30:07.349091       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
... skipping 7 lines ...
I0904 20:30:11.222671       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36008" resp=200
I0904 20:30:19.193030       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 153 items received
I0904 20:30:21.218337       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="105.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48570" resp=200
I0904 20:30:22.178726       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:30:22.240825       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:30:22.240935       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1577
I0904 20:30:22.241006       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase: Failed, bound to: "azuredisk-5466/pvc-l9s8f (uid: bfb1295c-5bba-4952-b819-cc7a39642eee)", boundByController: true
I0904 20:30:22.241072       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is bound to claim azuredisk-5466/pvc-l9s8f
I0904 20:30:22.241124       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: claim azuredisk-5466/pvc-l9s8f not found
I0904 20:30:22.241138       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:30:22.241153       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:30:22.241184       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee] started
I0904 20:30:22.248535       1 pv_controller.go:1340] isVolumeReleased[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is released
I0904 20:30:22.248552       1 pv_controller.go:1404] doDeleteVolume [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]
I0904 20:30:22.248696       1 pv_controller.go:1259] deletion of volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) since it's in attaching or detaching state
I0904 20:30:22.248723       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: set phase Failed
I0904 20:30:22.248733       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase Failed already set
E0904 20:30:22.248836       1 goroutinemap.go:150] Operation for "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" failed. No retries permitted until 2022-09-04 20:30:24.248816452 +0000 UTC m=+493.336310119 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) since it's in attaching or detaching state
I0904 20:30:22.262970       1 gc_controller.go:161] GC'ing orphaned
I0904 20:30:22.262987       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:30:22.373887       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:30:22.775323       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:30:23.371713       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-x4pd8) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) returned with <nil>
I0904 20:30:23.371745       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee) succeeded
... skipping 4 lines ...
I0904 20:30:31.218282       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="144.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38550" resp=200
I0904 20:30:32.184172       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0904 20:30:32.316040       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-qlvdg ReadyCondition updated. Updating timestamp.
I0904 20:30:34.188827       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0904 20:30:37.241223       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:30:37.241372       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1577
I0904 20:30:37.241452       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase: Failed, bound to: "azuredisk-5466/pvc-l9s8f (uid: bfb1295c-5bba-4952-b819-cc7a39642eee)", boundByController: true
I0904 20:30:37.241536       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is bound to claim azuredisk-5466/pvc-l9s8f
I0904 20:30:37.241590       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: claim azuredisk-5466/pvc-l9s8f not found
I0904 20:30:37.241605       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:30:37.241623       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:30:37.241657       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee] started
I0904 20:30:37.246278       1 pv_controller.go:1340] isVolumeReleased[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is released
... skipping 10 lines ...
I0904 20:30:42.263251       1 gc_controller.go:161] GC'ing orphaned
I0904 20:30:42.263270       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:30:42.426220       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee
I0904 20:30:42.426247       1 pv_controller.go:1435] volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" deleted
I0904 20:30:42.426258       1 pv_controller.go:1283] deleteVolumeOperation [pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: success
I0904 20:30:42.430498       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bfb1295c-5bba-4952-b819-cc7a39642eee" with version 1642
I0904 20:30:42.430538       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: phase: Failed, bound to: "azuredisk-5466/pvc-l9s8f (uid: bfb1295c-5bba-4952-b819-cc7a39642eee)", boundByController: true
I0904 20:30:42.430951       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: volume is bound to claim azuredisk-5466/pvc-l9s8f
I0904 20:30:42.431118       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: claim azuredisk-5466/pvc-l9s8f not found
I0904 20:30:42.431236       1 pv_controller.go:1108] reclaimVolume[pvc-bfb1295c-5bba-4952-b819-cc7a39642eee]: policy is Delete
I0904 20:30:42.431330       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]]
I0904 20:30:42.431422       1 pv_controller.go:1763] operation "delete-pvc-bfb1295c-5bba-4952-b819-cc7a39642eee[43fd5738-513b-4f9b-b92b-4a24e7c20786]" is already running, skipping
I0904 20:30:42.430912       1 pv_protection_controller.go:205] Got event on PV pvc-bfb1295c-5bba-4952-b819-cc7a39642eee
... skipping 106 lines ...
I0904 20:30:48.608306       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" lun 0 to node "capz-l9y77r-md-0-qlvdg".
I0904 20:30:48.608448       1 azure_controller_standard.go:93] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - attach disk(capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) with DiskEncryptionSetID()
I0904 20:30:49.486531       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0904 20:30:49.535007       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5466, name kube-root-ca.crt, uid 5b9fbc49-1b2a-4029-9510-4b8b802f1aab, event type delete
I0904 20:30:49.539559       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (4.940537ms)
I0904 20:30:49.541750       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-7bttk, uid 89bc7acb-a2bf-4725-9c39-8ee6f364366a, event type delete
E0904 20:30:49.552921       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-9tzfz" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0904 20:30:49.558505       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c23591400ea7, uid c9082711-ac9c-4f27-a637-d068b9cb97a6, event type delete
I0904 20:30:49.561156       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c236e06768ae, uid 67ca8afc-a88e-4d6c-aaf7-e6b7da0c33aa, event type delete
I0904 20:30:49.564645       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c237698d6d3e, uid 63958e63-493a-41d8-affe-0380c077c383, event type delete
I0904 20:30:49.567866       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c237f01e41d7, uid 6dcf908d-7ac3-4cbb-a9cc-c84cc961deed, event type delete
I0904 20:30:49.570513       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c2388f25bd7c, uid 7cc2bb5a-6657-4373-8b2a-74082fac60b6, event type delete
I0904 20:30:49.573060       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-64npl.1711c2396afbcf8f, uid 95c839af-eee3-4dcb-b2d1-a4cc6b97cf36, event type delete
... skipping 131 lines ...
I0904 20:31:00.234051       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:00.234156       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:00.234319       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
I0904 20:31:00.234417       1 pv_controller.go:1763] operation "delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]" is already running, skipping
I0904 20:31:00.235679       1 pv_controller.go:1340] isVolumeReleased[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is released
I0904 20:31:00.235702       1 pv_controller.go:1404] doDeleteVolume [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]
I0904 20:31:00.280081       1 pv_controller.go:1259] deletion of volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:31:00.280098       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: set phase Failed
I0904 20:31:00.280107       1 pv_controller.go:858] updating PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: set phase Failed
I0904 20:31:00.283055       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1734
I0904 20:31:00.283084       1 pv_controller.go:879] volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" entered phase "Failed"
I0904 20:31:00.283093       1 pv_controller.go:901] volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
E0904 20:31:00.283131       1 goroutinemap.go:150] Operation for "delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]" failed. No retries permitted until 2022-09-04 20:31:00.783112686 +0000 UTC m=+529.870606353 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:31:00.283358       1 event.go:291] "Event occurred" object="pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted"
I0904 20:31:00.283472       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1734
I0904 20:31:00.283853       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase: Failed, bound to: "azuredisk-2790/pvc-c95vm (uid: 68e7a07a-e822-41cd-bfcb-d06bf93e0b69)", boundByController: true
I0904 20:31:00.283884       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is bound to claim azuredisk-2790/pvc-c95vm
I0904 20:31:00.283904       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:00.283960       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:00.284014       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
I0904 20:31:00.284074       1 pv_controller.go:1765] operation "delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]" postponed due to exponential backoff
I0904 20:31:00.283795       1 pv_protection_controller.go:205] Got event on PV pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69
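The retry cadence in the entries above ("No retries permitted until ..." with durationBeforeRetry 500ms, then 1s, then 2s) is a per-operation doubling backoff: the delete keeps failing while the Azure disk is still attached to the node, and each failure postpones the next attempt for twice as long. The sketch below is a minimal, illustrative reproduction of that pattern; the function names and the simulated error are assumptions for the example, not the controller's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errStillAttached stands in for the "disk ... already attached to node ...,
// could not be deleted" failure seen in the log; it is only a placeholder.
var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// deleteWithBackoff retries op with a doubling delay (500ms, 1s, 2s, ...),
// mirroring the durationBeforeRetry values in the log. Illustrative only.
func deleteWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return fmt.Errorf("delete did not succeed after %d attempts", attempts)
}

func main() {
	remaining := 3 // pretend the disk detaches after three failed attempts
	op := func() error {
		if remaining > 0 {
			remaining--
			return errStillAttached
		}
		return nil // detach completed; deletion now succeeds
	}
	if err := deleteWithBackoff(op, 500*time.Millisecond, 4*time.Second, 10); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("volume deleted")
}
```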
... skipping 2 lines ...
I0904 20:31:02.263929       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:31:02.321519       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-qlvdg ReadyCondition updated. Updating timestamp.
I0904 20:31:04.895228       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:31:05.909178       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:31:07.242067       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:31:07.242368       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1734
I0904 20:31:07.242522       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase: Failed, bound to: "azuredisk-2790/pvc-c95vm (uid: 68e7a07a-e822-41cd-bfcb-d06bf93e0b69)", boundByController: true
I0904 20:31:07.242666       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is bound to claim azuredisk-2790/pvc-c95vm
I0904 20:31:07.242754       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:07.242846       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:07.242868       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
I0904 20:31:07.243030       1 pv_controller.go:1231] deleteVolumeOperation [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69] started
I0904 20:31:07.248390       1 pv_controller.go:1340] isVolumeReleased[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is released
I0904 20:31:07.248407       1 pv_controller.go:1404] doDeleteVolume [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]
I0904 20:31:07.269588       1 pv_controller.go:1259] deletion of volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:31:07.269606       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: set phase Failed
I0904 20:31:07.269614       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase Failed already set
E0904 20:31:07.269669       1 goroutinemap.go:150] Operation for "delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]" failed. No retries permitted until 2022-09-04 20:31:08.26963871 +0000 UTC m=+537.357132377 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:31:07.375710       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:31:09.858905       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:31:09.858928       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:31:09.963621       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l9y77r-md-0-qlvdg" succeeded. VolumesAttached: []
I0904 20:31:09.963713       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69") on node "capz-l9y77r-md-0-qlvdg" 
I0904 20:31:09.963915       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
... skipping 8 lines ...
I0904 20:31:17.671060       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 110 items received
I0904 20:31:20.652283       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0904 20:31:21.218104       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58714" resp=200
I0904 20:31:22.179926       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:31:22.242912       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:31:22.243027       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1734
I0904 20:31:22.243082       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase: Failed, bound to: "azuredisk-2790/pvc-c95vm (uid: 68e7a07a-e822-41cd-bfcb-d06bf93e0b69)", boundByController: true
I0904 20:31:22.243138       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is bound to claim azuredisk-2790/pvc-c95vm
I0904 20:31:22.243180       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:22.243208       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:22.243249       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
I0904 20:31:22.243314       1 pv_controller.go:1231] deleteVolumeOperation [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69] started
I0904 20:31:22.248729       1 pv_controller.go:1340] isVolumeReleased[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is released
I0904 20:31:22.248746       1 pv_controller.go:1404] doDeleteVolume [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]
I0904 20:31:22.248777       1 pv_controller.go:1259] deletion of volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) since it's in attaching or detaching state
I0904 20:31:22.248791       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: set phase Failed
I0904 20:31:22.248801       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase Failed already set
E0904 20:31:22.248828       1 goroutinemap.go:150] Operation for "delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]" failed. No retries permitted until 2022-09-04 20:31:24.248810149 +0000 UTC m=+553.336303816 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) since it's in attaching or detaching state
I0904 20:31:22.264338       1 gc_controller.go:161] GC'ing orphaned
I0904 20:31:22.264359       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:31:22.376327       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:31:22.805785       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:31:25.523163       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) returned with <nil>
I0904 20:31:25.523277       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69) succeeded
I0904 20:31:25.523308       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69 was detached from node:capz-l9y77r-md-0-qlvdg
I0904 20:31:25.523367       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69") on node "capz-l9y77r-md-0-qlvdg" 
I0904 20:31:31.218175       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="63.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51262" resp=200
I0904 20:31:34.753478       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0904 20:31:36.907735       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 22 items received
I0904 20:31:37.243394       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:31:37.243591       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1734
I0904 20:31:37.243629       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase: Failed, bound to: "azuredisk-2790/pvc-c95vm (uid: 68e7a07a-e822-41cd-bfcb-d06bf93e0b69)", boundByController: true
I0904 20:31:37.243666       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is bound to claim azuredisk-2790/pvc-c95vm
I0904 20:31:37.243689       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:37.243698       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:37.243712       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
I0904 20:31:37.243752       1 pv_controller.go:1231] deleteVolumeOperation [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69] started
I0904 20:31:37.247095       1 pv_controller.go:1340] isVolumeReleased[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is released
... skipping 4 lines ...
I0904 20:31:42.264772       1 gc_controller.go:161] GC'ing orphaned
I0904 20:31:42.264798       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:31:42.453833       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69
I0904 20:31:42.453862       1 pv_controller.go:1435] volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" deleted
I0904 20:31:42.453894       1 pv_controller.go:1283] deleteVolumeOperation [pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: success
I0904 20:31:42.461446       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69" with version 1796
I0904 20:31:42.461602       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: phase: Failed, bound to: "azuredisk-2790/pvc-c95vm (uid: 68e7a07a-e822-41cd-bfcb-d06bf93e0b69)", boundByController: true
I0904 20:31:42.461668       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: volume is bound to claim azuredisk-2790/pvc-c95vm
I0904 20:31:42.461742       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: claim azuredisk-2790/pvc-c95vm not found
I0904 20:31:42.461532       1 pv_protection_controller.go:205] Got event on PV pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69
I0904 20:31:42.461828       1 pv_protection_controller.go:125] Processing PV pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69
I0904 20:31:42.462222       1 pv_controller.go:1108] reclaimVolume[pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69]: policy is Delete
I0904 20:31:42.462290       1 pv_controller.go:1752] scheduleOperation[delete-pvc-68e7a07a-e822-41cd-bfcb-d06bf93e0b69[7fdefd9c-ff3b-4557-8010-3ca899152ffa]]
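Each resync cycle above follows the same decision path: the PV is still bound to a claim that no longer exists, the reclaim policy is Delete, so a delete operation is scheduled; once the attach/detach controller has detached the disk, doDeleteVolume succeeds and the managed disk plus the PV object are removed. A compact sketch of that dispatch, using simplified stand-in types (the real controller works with *v1.PersistentVolume and an informer cache), is:

```go
package main

import "fmt"

// reclaimPolicy and persistentVolume are simplified stand-ins for the API
// objects handled in the log; they are assumptions for illustration only.
type reclaimPolicy string

const (
	reclaimDelete reclaimPolicy = "Delete"
	reclaimRetain reclaimPolicy = "Retain"
)

type persistentVolume struct {
	name       string
	boundClaim string // namespace/name recorded in spec.claimRef
	policy     reclaimPolicy
}

// reclaimVolume mirrors the "claim not found -> policy is Delete ->
// scheduleOperation[delete-...]" sequence in the log. claimExists is a
// placeholder for the claim informer lookup.
func reclaimVolume(pv persistentVolume, claimExists func(string) bool, scheduleDelete func(string)) {
	if claimExists(pv.boundClaim) {
		return // claim still present: nothing to reclaim
	}
	switch pv.policy {
	case reclaimDelete:
		scheduleDelete(pv.name) // runs the background delete operation
	case reclaimRetain:
		fmt.Printf("volume %s released; retained for manual cleanup\n", pv.name)
	}
}

func main() {
	pv := persistentVolume{name: "pvc-68e7a07a", boundClaim: "azuredisk-2790/pvc-c95vm", policy: reclaimDelete}
	reclaimVolume(pv,
		func(string) bool { return false }, // the claim was deleted by the test
		func(name string) { fmt.Printf("scheduleOperation[delete-%s]\n", name) },
	)
}
```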
... skipping 106 lines ...
I0904 20:31:50.188031       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 11 items received
I0904 20:31:50.194644       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782" to node "capz-l9y77r-md-0-qlvdg".
I0904 20:31:50.214476       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782" lun 0 to node "capz-l9y77r-md-0-qlvdg".
I0904 20:31:50.214508       1 azure_controller_standard.go:93] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - attach disk(capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) with DiskEncryptionSetID()
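GetDiskLun first reports that no LUN is recorded for the new disk, after which the attach is issued at LUN 0, the lowest slot free on the VM. The following is a minimal sketch of a "lowest free LUN" pick, assuming a simple in-memory set of occupied LUNs; the real Azure cloud provider derives this from the VM's data-disk list, so treat the names and limits here as illustrative.

```go
package main

import (
	"errors"
	"fmt"
)

// nextFreeLun returns the smallest LUN in [0, maxLuns) not already used on
// the node. The used map is a placeholder for the VM's attached data disks.
func nextFreeLun(used map[int]bool, maxLuns int) (int, error) {
	for lun := 0; lun < maxLuns; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, errors.New("no free LUN on node")
}

func main() {
	used := map[int]bool{} // node currently has no data disks attached
	lun, err := nextFreeLun(used, 64)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("attaching disk at lun %d\n", lun) // matches "lun 0" in the log
}
```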
I0904 20:31:51.102578       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0904 20:31:51.124964       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-c26s8, uid 100dd6b6-9e5e-4898-9b07-d2b81810d502, event type delete
E0904 20:31:51.138560       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-966wp" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0904 20:31:51.159784       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0904 20:31:51.159928       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid 5a7cfa57-f240-4ac8-a573-bf3ffcdd1a7c, event type delete
I0904 20:31:51.160036       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2.4µs)
I0904 20:31:51.201244       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-mrq5f.1711c25e75b6fd6a, uid da66d29d-6481-49f9-9794-26af70d2e4f8, event type delete
I0904 20:31:51.210599       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-mrq5f.1711c25fc7aacefe, uid adf44411-95b6-460a-9f6e-42b5f9546c0d, event type delete
I0904 20:31:51.218815       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.3µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49028" resp=200
... skipping 136 lines ...
I0904 20:32:05.998616       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:05.998626       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
I0904 20:32:05.998633       1 pv_controller.go:1763] operation "delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]" is already running, skipping
I0904 20:32:05.998657       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a3f21a61-c478-4872-afdd-a63ef0705782] started
I0904 20:32:06.000760       1 pv_controller.go:1340] isVolumeReleased[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is released
I0904 20:32:06.000775       1 pv_controller.go:1404] doDeleteVolume [pvc-a3f21a61-c478-4872-afdd-a63ef0705782]
I0904 20:32:06.022476       1 pv_controller.go:1259] deletion of volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:32:06.022495       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: set phase Failed
I0904 20:32:06.022504       1 pv_controller.go:858] updating PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: set phase Failed
I0904 20:32:06.025036       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1885
I0904 20:32:06.025061       1 pv_controller.go:879] volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" entered phase "Failed"
I0904 20:32:06.025070       1 pv_controller.go:901] volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
E0904 20:32:06.025104       1 goroutinemap.go:150] Operation for "delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]" failed. No retries permitted until 2022-09-04 20:32:06.525087226 +0000 UTC m=+595.612580893 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:32:06.025547       1 event.go:291] "Event occurred" object="pvc-a3f21a61-c478-4872-afdd-a63ef0705782" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted"
I0904 20:32:06.025659       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1885
I0904 20:32:06.025941       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase: Failed, bound to: "azuredisk-5356/pvc-kshwn (uid: a3f21a61-c478-4872-afdd-a63ef0705782)", boundByController: true
I0904 20:32:06.026062       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is bound to claim azuredisk-5356/pvc-kshwn
I0904 20:32:06.026177       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: claim azuredisk-5356/pvc-kshwn not found
I0904 20:32:06.026280       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:06.026380       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
I0904 20:32:06.026503       1 pv_controller.go:1765] operation "delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]" postponed due to exponential backoff
I0904 20:32:06.025670       1 pv_protection_controller.go:205] Got event on PV pvc-a3f21a61-c478-4872-afdd-a63ef0705782
I0904 20:32:07.244798       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:07.244861       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1885
I0904 20:32:07.244909       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase: Failed, bound to: "azuredisk-5356/pvc-kshwn (uid: a3f21a61-c478-4872-afdd-a63ef0705782)", boundByController: true
I0904 20:32:07.244946       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is bound to claim azuredisk-5356/pvc-kshwn
I0904 20:32:07.244966       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: claim azuredisk-5356/pvc-kshwn not found
I0904 20:32:07.244976       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:07.244992       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
I0904 20:32:07.245033       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a3f21a61-c478-4872-afdd-a63ef0705782] started
I0904 20:32:07.249925       1 pv_controller.go:1340] isVolumeReleased[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is released
I0904 20:32:07.249942       1 pv_controller.go:1404] doDeleteVolume [pvc-a3f21a61-c478-4872-afdd-a63ef0705782]
I0904 20:32:07.281806       1 pv_controller.go:1259] deletion of volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:32:07.282486       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: set phase Failed
I0904 20:32:07.282504       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase Failed already set
E0904 20:32:07.282545       1 goroutinemap.go:150] Operation for "delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]" failed. No retries permitted until 2022-09-04 20:32:08.282514215 +0000 UTC m=+597.370007882 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:32:07.378521       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:08.803372       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 15 items received
I0904 20:32:09.901380       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:32:09.901408       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:32:09.971974       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:32:09.972071       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782 to the node "capz-l9y77r-md-0-qlvdg" mounted false
... skipping 8 lines ...
I0904 20:32:21.217960       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:33992" resp=200
I0904 20:32:22.171578       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:22.175711       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:22.181850       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:22.244905       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:22.245092       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1885
I0904 20:32:22.245177       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase: Failed, bound to: "azuredisk-5356/pvc-kshwn (uid: a3f21a61-c478-4872-afdd-a63ef0705782)", boundByController: true
I0904 20:32:22.245215       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is bound to claim azuredisk-5356/pvc-kshwn
I0904 20:32:22.245234       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: claim azuredisk-5356/pvc-kshwn not found
I0904 20:32:22.245242       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:22.245256       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
I0904 20:32:22.245288       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a3f21a61-c478-4872-afdd-a63ef0705782] started
I0904 20:32:22.252360       1 controller.go:272] Triggering nodeSync
I0904 20:32:22.252539       1 controller.go:291] nodeSync has been triggered
I0904 20:32:22.252628       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:32:22.252730       1 controller.go:804] Finished updateLoadBalancerHosts
I0904 20:32:22.252819       1 controller.go:731] It took 0.000192401 seconds to finish nodeSyncInternal
I0904 20:32:22.254132       1 pv_controller.go:1340] isVolumeReleased[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is released
I0904 20:32:22.254148       1 pv_controller.go:1404] doDeleteVolume [pvc-a3f21a61-c478-4872-afdd-a63ef0705782]
I0904 20:32:22.254199       1 pv_controller.go:1259] deletion of volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) since it's in attaching or detaching state
I0904 20:32:22.254225       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: set phase Failed
I0904 20:32:22.254234       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase Failed already set
E0904 20:32:22.254289       1 goroutinemap.go:150] Operation for "delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]" failed. No retries permitted until 2022-09-04 20:32:24.254241845 +0000 UTC m=+613.341735612 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782) since it's in attaching or detaching state
I0904 20:32:22.266537       1 gc_controller.go:161] GC'ing orphaned
I0904 20:32:22.266558       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:32:22.360245       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0904 20:32:22.379519       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:22.841076       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:32:23.001009       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:32:23.000959906 +0000 UTC m=+612.088453673"
... skipping 6 lines ...
I0904 20:32:25.523496       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782") on node "capz-l9y77r-md-0-qlvdg" 
I0904 20:32:26.000803       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:32:26.000750253 +0000 UTC m=+615.088244020"
I0904 20:32:26.001254       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="491.704µs"
I0904 20:32:31.218908       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="54.1µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58478" resp=200
I0904 20:32:37.245780       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:37.245963       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1885
I0904 20:32:37.246089       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase: Failed, bound to: "azuredisk-5356/pvc-kshwn (uid: a3f21a61-c478-4872-afdd-a63ef0705782)", boundByController: true
I0904 20:32:37.246196       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is bound to claim azuredisk-5356/pvc-kshwn
I0904 20:32:37.246219       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: claim azuredisk-5356/pvc-kshwn not found
I0904 20:32:37.246229       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:37.246244       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
I0904 20:32:37.246310       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a3f21a61-c478-4872-afdd-a63ef0705782] started
I0904 20:32:37.255272       1 pv_controller.go:1340] isVolumeReleased[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is released
... skipping 3 lines ...
I0904 20:32:42.267381       1 gc_controller.go:161] GC'ing orphaned
I0904 20:32:42.267406       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:32:42.472930       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-a3f21a61-c478-4872-afdd-a63ef0705782
I0904 20:32:42.472964       1 pv_controller.go:1435] volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" deleted
I0904 20:32:42.472995       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: success
I0904 20:32:42.482084       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a3f21a61-c478-4872-afdd-a63ef0705782" with version 1944
I0904 20:32:42.482367       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: phase: Failed, bound to: "azuredisk-5356/pvc-kshwn (uid: a3f21a61-c478-4872-afdd-a63ef0705782)", boundByController: true
I0904 20:32:42.482409       1 pv_protection_controller.go:205] Got event on PV pvc-a3f21a61-c478-4872-afdd-a63ef0705782
I0904 20:32:42.482776       1 pv_protection_controller.go:125] Processing PV pvc-a3f21a61-c478-4872-afdd-a63ef0705782
I0904 20:32:42.483371       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: volume is bound to claim azuredisk-5356/pvc-kshwn
I0904 20:32:42.483501       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: claim azuredisk-5356/pvc-kshwn not found
I0904 20:32:42.483601       1 pv_controller.go:1108] reclaimVolume[pvc-a3f21a61-c478-4872-afdd-a63ef0705782]: policy is Delete
I0904 20:32:42.483706       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a3f21a61-c478-4872-afdd-a63ef0705782[d61f3502-ed92-45e7-8ef2-a11c2ef8a5bf]]
... skipping 105 lines ...
I0904 20:32:50.875521       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" to node "capz-l9y77r-md-0-qlvdg".
I0904 20:32:50.910642       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" lun 0 to node "capz-l9y77r-md-0-qlvdg".
I0904 20:32:50.910680       1 azure_controller_standard.go:93] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - attach disk(capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) with DiskEncryptionSetID()
I0904 20:32:51.218858       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="52.2µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58406" resp=200
I0904 20:32:51.792377       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0904 20:32:51.808846       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-llmcf, uid 5153e416-3170-467a-abd4-f1266d623b19, event type delete
E0904 20:32:51.822764       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-plqsf" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0904 20:32:51.840541       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3µs)
I0904 20:32:51.840636       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0904 20:32:51.840759       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid 1df36e9d-daeb-43ea-942d-90bba5e21efc, event type delete
I0904 20:32:51.849964       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-d9zkv.1711c26ccd1afbea, uid 96b15630-0e11-47cc-a6fd-f73eb7cdf176, event type delete
I0904 20:32:51.854625       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-d9zkv.1711c26e1dfc50dc, uid ed1d89a5-8f9b-4def-9f9a-628213b7e8eb, event type delete
I0904 20:32:51.856055       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-d9zkv.1711c26fc3499e02, uid a7a9d24c-d32a-4626-8085-b00830bafe95, event type delete
... skipping 656 lines ...
I0904 20:34:05.223201       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]]
I0904 20:34:05.223207       1 pv_controller.go:1763] operation "delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]" is already running, skipping
I0904 20:34:05.223230       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54] started
I0904 20:34:05.227726       1 pv_controller.go:1340] isVolumeReleased[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is released
I0904 20:34:05.227740       1 pv_controller.go:1404] doDeleteVolume [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]
I0904 20:34:05.254511       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 on node "capz-l9y77r-md-0-x4pd8"
I0904 20:34:05.254937       1 pv_controller.go:1259] deletion of volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:34:05.254956       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: set phase Failed
I0904 20:34:05.254970       1 pv_controller.go:858] updating PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: set phase Failed
I0904 20:34:05.266190       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2165
I0904 20:34:05.266222       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase: Failed, bound to: "azuredisk-5194/pvc-nngt5 (uid: 5dd447fc-e013-4ce6-ac8c-c1ee6013be54)", boundByController: true
I0904 20:34:05.266243       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is bound to claim azuredisk-5194/pvc-nngt5
I0904 20:34:05.266262       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: claim azuredisk-5194/pvc-nngt5 not found
I0904 20:34:05.266269       1 pv_controller.go:1108] reclaimVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: policy is Delete
I0904 20:34:05.266282       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]]
I0904 20:34:05.266289       1 pv_controller.go:1763] operation "delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]" is already running, skipping
I0904 20:34:05.266301       1 pv_protection_controller.go:205] Got event on PV pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54
I0904 20:34:05.270826       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2165
I0904 20:34:05.271049       1 pv_controller.go:879] volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" entered phase "Failed"
I0904 20:34:05.271081       1 pv_controller.go:901] volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
E0904 20:34:05.271154       1 goroutinemap.go:150] Operation for "delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]" failed. No retries permitted until 2022-09-04 20:34:05.771136044 +0000 UTC m=+714.858629811 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:34:05.271656       1 event.go:291] "Event occurred" object="pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted"
I0904 20:34:07.249929       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:34:07.250132       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2165
I0904 20:34:07.250178       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-ztthq" with version 1972
I0904 20:34:07.250214       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: phase: Bound, bound to: "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36", bindCompleted: true, boundByController: true
I0904 20:34:07.250301       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase: Failed, bound to: "azuredisk-5194/pvc-nngt5 (uid: 5dd447fc-e013-4ce6-ac8c-c1ee6013be54)", boundByController: true
I0904 20:34:07.250332       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is bound to claim azuredisk-5194/pvc-nngt5
I0904 20:34:07.250398       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: claim azuredisk-5194/pvc-nngt5 not found
I0904 20:34:07.250406       1 pv_controller.go:1108] reclaimVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: policy is Delete
I0904 20:34:07.250444       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]]
I0904 20:34:07.250513       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 1969
I0904 20:34:07.250532       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Bound, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
... skipping 39 lines ...
I0904 20:34:07.252129       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 found: phase: Bound, bound to: "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04", bindCompleted: true, boundByController: true
I0904 20:34:07.252240       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: all is bound
I0904 20:34:07.252336       1 pv_controller.go:858] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Bound
I0904 20:34:07.252558       1 pv_controller.go:861] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase Bound already set
I0904 20:34:07.253482       1 pv_controller.go:1340] isVolumeReleased[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is released
I0904 20:34:07.253497       1 pv_controller.go:1404] doDeleteVolume [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]
I0904 20:34:07.317753       1 pv_controller.go:1259] deletion of volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:34:07.317772       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: set phase Failed
I0904 20:34:07.317782       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase Failed already set
E0904 20:34:07.317810       1 goroutinemap.go:150] Operation for "delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]" failed. No retries permitted until 2022-09-04 20:34:08.317790834 +0000 UTC m=+717.405284501 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:34:07.384293       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:34:07.445551       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
I0904 20:34:07.445760       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 to the node "capz-l9y77r-md-0-x4pd8" mounted true
I0904 20:34:07.445795       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54 to the node "capz-l9y77r-md-0-x4pd8" mounted false
I0904 20:34:07.475173       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04\"}]}}" for node "capz-l9y77r-md-0-x4pd8" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04 0}]
I0904 20:34:07.475656       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54") on node "capz-l9y77r-md-0-x4pd8" 
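The lines above show the pattern that repeats for each dynamically provisioned disk in this run: deleteVolumeOperation fails while the Azure managed disk is still attached to a node, the failure is recorded with a retry backoff, and in parallel the attach/detach controller starts DetachVolume; a later retry of the delete then succeeds once the detach has finished. The following is a minimal, standard-library-only Go sketch of that ordering, not the controller code itself; every name in it is hypothetical.

```go
// pvdelete_sketch.go — illustrative only: delete fails while attached,
// detach completes in parallel, a later retry of the delete succeeds.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// fakeDisk stands in for the Azure managed disk the two controllers act on.
type fakeDisk struct {
	mu       sync.Mutex
	attached bool
}

func (d *fakeDisk) delete() error {
	d.mu.Lock()
	defer d.mu.Unlock()
	if d.attached {
		return errStillAttached // mirrors "deletion of volume ... failed: disk ... already attached"
	}
	return nil
}

func (d *fakeDisk) detach() {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.attached = false
}

func main() {
	disk := &fakeDisk{attached: true}

	// Attach/detach side: the detach completes some time after the pod is gone.
	go func() {
		time.Sleep(2 * time.Second)
		disk.detach()
		fmt.Println("detach disk succeeded")
	}()

	// PV controller side: keep retrying the delete until the disk is free.
	for {
		if err := disk.delete(); err != nil {
			fmt.Println("deleteVolumeOperation failed, will retry:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("deleted managed disk; PV object can now be removed")
		return
	}
}
```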
... skipping 27 lines ...
I0904 20:34:22.251412       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: all is bound
I0904 20:34:22.251420       1 pv_controller.go:858] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Bound
I0904 20:34:22.251428       1 pv_controller.go:861] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase Bound already set
I0904 20:34:22.251059       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" found: phase: Bound, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:34:22.251497       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: claim is already correctly bound
I0904 20:34:22.251541       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2165
I0904 20:34:22.251564       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase: Failed, bound to: "azuredisk-5194/pvc-nngt5 (uid: 5dd447fc-e013-4ce6-ac8c-c1ee6013be54)", boundByController: true
I0904 20:34:22.251675       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is bound to claim azuredisk-5194/pvc-nngt5
I0904 20:34:22.251546       1 pv_controller.go:1012] binding volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" to claim "azuredisk-5194/pvc-ztthq"
I0904 20:34:22.251757       1 pv_controller.go:910] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: binding to "azuredisk-5194/pvc-ztthq"
I0904 20:34:22.251837       1 pv_controller.go:922] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: already bound to "azuredisk-5194/pvc-ztthq"
I0904 20:34:22.251858       1 pv_controller.go:858] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Bound
I0904 20:34:22.251866       1 pv_controller.go:861] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase Bound already set
... skipping 23 lines ...
I0904 20:34:22.253489       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-b6r46] status: phase Bound already set
I0904 20:34:22.253577       1 pv_controller.go:1038] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" bound to claim "azuredisk-5194/pvc-b6r46"
I0904 20:34:22.253674       1 pv_controller.go:1039] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-b6r46 (uid: ec917b63-cb6c-4359-b58f-522ee7c50d04)", boundByController: true
I0904 20:34:22.253763       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-b6r46" status after binding: phase: Bound, bound to: "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04", bindCompleted: true, boundByController: true
I0904 20:34:22.261141       1 pv_controller.go:1340] isVolumeReleased[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is released
I0904 20:34:22.261156       1 pv_controller.go:1404] doDeleteVolume [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]
I0904 20:34:22.261188       1 pv_controller.go:1259] deletion of volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) since it's in attaching or detaching state
I0904 20:34:22.261200       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: set phase Failed
I0904 20:34:22.261209       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase Failed already set
E0904 20:34:22.261235       1 goroutinemap.go:150] Operation for "delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]" failed. No retries permitted until 2022-09-04 20:34:24.26121621 +0000 UTC m=+733.348709877 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) since it's in attaching or detaching state
I0904 20:34:22.269442       1 gc_controller.go:161] GC'ing orphaned
I0904 20:34:22.269463       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:34:22.384491       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:34:22.912471       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:34:23.104756       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-x4pd8) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) returned with <nil>
I0904 20:34:23.104796       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54) succeeded
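The goroutinemap errors above postpone each failed delete with a growing durationBeforeRetry (1s then 2s for this volume; 500ms then 1s for the next one further down), so the delete is not retried faster than the detach can complete. A minimal sketch of that doubling pattern follows; the 2-minute cap is an illustrative assumption, since the actual cap is not visible in this log.

```go
// backoff_sketch.go — the doubling retry-delay pattern seen in the log,
// not the goroutinemap implementation itself.
package main

import (
	"fmt"
	"time"
)

// nextRetry doubles the wait after every failure, capped so a stuck volume
// does not back off forever (cap value assumed for illustration).
func nextRetry(last time.Duration) time.Duration {
	const initial = 500 * time.Millisecond
	const maxWait = 2 * time.Minute
	if last <= 0 {
		return initial
	}
	next := last * 2
	if next > maxWait {
		return maxWait
	}
	return next
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 5; i++ {
		d = nextRetry(d)
		fmt.Printf("retry %d no sooner than %s from now\n", i+1, d)
	}
}
```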
... skipping 15 lines ...
I0904 20:34:37.252199       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is bound to claim azuredisk-5194/pvc-b6r46
I0904 20:34:37.252244       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 found: phase: Bound, bound to: "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04", bindCompleted: true, boundByController: true
I0904 20:34:37.252293       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: all is bound
I0904 20:34:37.252323       1 pv_controller.go:858] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Bound
I0904 20:34:37.252385       1 pv_controller.go:861] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase Bound already set
I0904 20:34:37.252421       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2165
I0904 20:34:37.252474       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase: Failed, bound to: "azuredisk-5194/pvc-nngt5 (uid: 5dd447fc-e013-4ce6-ac8c-c1ee6013be54)", boundByController: true
I0904 20:34:37.252528       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is bound to claim azuredisk-5194/pvc-nngt5
I0904 20:34:37.252563       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: claim azuredisk-5194/pvc-nngt5 not found
I0904 20:34:37.252588       1 pv_controller.go:1108] reclaimVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: policy is Delete
I0904 20:34:37.252629       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]]
I0904 20:34:37.252670       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54] started
I0904 20:34:37.251681       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: phase: Bound, bound to: "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36", bindCompleted: true, boundByController: true
... skipping 34 lines ...
I0904 20:34:42.270409       1 gc_controller.go:161] GC'ing orphaned
I0904 20:34:42.270439       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:34:42.463402       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54
I0904 20:34:42.463429       1 pv_controller.go:1435] volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" deleted
I0904 20:34:42.463441       1 pv_controller.go:1283] deleteVolumeOperation [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: success
I0904 20:34:42.473688       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54" with version 2220
I0904 20:34:42.473723       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: phase: Failed, bound to: "azuredisk-5194/pvc-nngt5 (uid: 5dd447fc-e013-4ce6-ac8c-c1ee6013be54)", boundByController: true
I0904 20:34:42.473762       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: volume is bound to claim azuredisk-5194/pvc-nngt5
I0904 20:34:42.473781       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: claim azuredisk-5194/pvc-nngt5 not found
I0904 20:34:42.473791       1 pv_controller.go:1108] reclaimVolume[pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54]: policy is Delete
I0904 20:34:42.473806       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54[fedc488e-a766-497d-80d4-466129a5d171]]
I0904 20:34:42.473860       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54] started
I0904 20:34:42.474096       1 pv_protection_controller.go:205] Got event on PV pvc-5dd447fc-e013-4ce6-ac8c-c1ee6013be54
... skipping 193 lines ...
I0904 20:35:15.342545       1 pv_controller.go:1108] reclaimVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: policy is Delete
I0904 20:35:15.342681       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]]
I0904 20:35:15.342797       1 pv_controller.go:1763] operation "delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]" is already running, skipping
I0904 20:35:15.342071       1 pv_protection_controller.go:205] Got event on PV pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04
I0904 20:35:15.344426       1 pv_controller.go:1340] isVolumeReleased[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is released
I0904 20:35:15.344444       1 pv_controller.go:1404] doDeleteVolume [pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]
I0904 20:35:15.368303       1 pv_controller.go:1259] deletion of volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:35:15.368322       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Failed
I0904 20:35:15.368330       1 pv_controller.go:858] updating PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Failed
I0904 20:35:15.370934       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" with version 2282
I0904 20:35:15.371106       1 pv_controller.go:879] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" entered phase "Failed"
I0904 20:35:15.371213       1 pv_controller.go:901] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
E0904 20:35:15.371258       1 goroutinemap.go:150] Operation for "delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]" failed. No retries permitted until 2022-09-04 20:35:15.871239975 +0000 UTC m=+784.958733642 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:35:15.371005       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" with version 2282
I0904 20:35:15.371318       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase: Failed, bound to: "azuredisk-5194/pvc-b6r46 (uid: ec917b63-cb6c-4359-b58f-522ee7c50d04)", boundByController: true
I0904 20:35:15.371357       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is bound to claim azuredisk-5194/pvc-b6r46
I0904 20:35:15.371393       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 not found
I0904 20:35:15.371511       1 pv_controller.go:1108] reclaimVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: policy is Delete
I0904 20:35:15.371559       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]]
I0904 20:35:15.371569       1 pv_controller.go:1765] operation "delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]" postponed due to exponential backoff
I0904 20:35:15.371046       1 pv_protection_controller.go:205] Got event on PV pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04
... skipping 25 lines ...
I0904 20:35:22.253922       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: already bound to "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36"
I0904 20:35:22.253967       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq] status: set phase Bound
I0904 20:35:22.254007       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq] status: phase Bound already set
I0904 20:35:22.254020       1 pv_controller.go:1038] volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" bound to claim "azuredisk-5194/pvc-ztthq"
I0904 20:35:22.254159       1 pv_controller.go:1039] volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:35:22.254179       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-ztthq" status after binding: phase: Bound, bound to: "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36", bindCompleted: true, boundByController: true
I0904 20:35:22.253497       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase: Failed, bound to: "azuredisk-5194/pvc-b6r46 (uid: ec917b63-cb6c-4359-b58f-522ee7c50d04)", boundByController: true
I0904 20:35:22.254293       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is bound to claim azuredisk-5194/pvc-b6r46
I0904 20:35:22.254328       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 not found
I0904 20:35:22.254339       1 pv_controller.go:1108] reclaimVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: policy is Delete
I0904 20:35:22.254379       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]]
I0904 20:35:22.254415       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 1969
I0904 20:35:22.254435       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Bound, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
... skipping 2 lines ...
I0904 20:35:22.254599       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: all is bound
I0904 20:35:22.254612       1 pv_controller.go:858] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Bound
I0904 20:35:22.254630       1 pv_controller.go:861] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase Bound already set
I0904 20:35:22.254692       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04] started
I0904 20:35:22.262213       1 pv_controller.go:1340] isVolumeReleased[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is released
I0904 20:35:22.262231       1 pv_controller.go:1404] doDeleteVolume [pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]
I0904 20:35:22.262279       1 pv_controller.go:1259] deletion of volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04) since it's in attaching or detaching state
I0904 20:35:22.262303       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: set phase Failed
I0904 20:35:22.262312       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase Failed already set
E0904 20:35:22.262365       1 goroutinemap.go:150] Operation for "delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]" failed. No retries permitted until 2022-09-04 20:35:23.262319076 +0000 UTC m=+792.349812743 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04) since it's in attaching or detaching state
I0904 20:35:22.271637       1 gc_controller.go:161] GC'ing orphaned
I0904 20:35:22.271667       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:35:22.365641       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-x4pd8 ReadyCondition updated. Updating timestamp.
I0904 20:35:22.386842       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:35:22.945666       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:35:26.210264       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
... skipping 19 lines ...
I0904 20:35:37.254278       1 pv_controller.go:861] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase Bound already set
I0904 20:35:37.254287       1 pv_controller.go:922] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: already bound to "azuredisk-5194/pvc-ztthq"
I0904 20:35:37.254289       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" with version 2282
I0904 20:35:37.254294       1 pv_controller.go:858] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Bound
I0904 20:35:37.254302       1 pv_controller.go:861] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase Bound already set
I0904 20:35:37.254310       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: binding to "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36"
I0904 20:35:37.254319       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase: Failed, bound to: "azuredisk-5194/pvc-b6r46 (uid: ec917b63-cb6c-4359-b58f-522ee7c50d04)", boundByController: true
I0904 20:35:37.254328       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq]: already bound to "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36"
I0904 20:35:37.254337       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-ztthq] status: set phase Bound
I0904 20:35:37.254337       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is bound to claim azuredisk-5194/pvc-b6r46
I0904 20:35:37.254355       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 not found
I0904 20:35:37.254371       1 pv_controller.go:1108] reclaimVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: policy is Delete
I0904 20:35:37.254385       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]]
... skipping 16 lines ...
I0904 20:35:42.480995       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04
I0904 20:35:42.481023       1 pv_controller.go:1435] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" deleted
I0904 20:35:42.481052       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: success
I0904 20:35:42.490574       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" with version 2323
I0904 20:35:42.490689       1 pv_protection_controller.go:205] Got event on PV pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04
I0904 20:35:42.490787       1 pv_protection_controller.go:125] Processing PV pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04
I0904 20:35:42.490741       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: phase: Failed, bound to: "azuredisk-5194/pvc-b6r46 (uid: ec917b63-cb6c-4359-b58f-522ee7c50d04)", boundByController: true
I0904 20:35:42.491280       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: volume is bound to claim azuredisk-5194/pvc-b6r46
I0904 20:35:42.491307       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: claim azuredisk-5194/pvc-b6r46 not found
I0904 20:35:42.491318       1 pv_controller.go:1108] reclaimVolume[pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04]: policy is Delete
I0904 20:35:42.491333       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]]
I0904 20:35:42.491340       1 pv_controller.go:1763] operation "delete-pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04[1347a3cb-9875-42c7-aece-35165c9f0ae7]" is already running, skipping
I0904 20:35:42.495321       1 pv_controller_base.go:235] volume "pvc-ec917b63-cb6c-4359-b58f-522ee7c50d04" deleted
... skipping 146 lines ...
I0904 20:36:15.907821       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]]
I0904 20:36:15.907829       1 pv_controller.go:1763] operation "delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]" is already running, skipping
I0904 20:36:15.907843       1 pv_protection_controller.go:205] Got event on PV pvc-b317b49a-49f5-4f72-aa08-a24de3520d36
I0904 20:36:15.910107       1 pv_controller.go:1340] isVolumeReleased[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is released
I0904 20:36:15.910123       1 pv_controller.go:1404] doDeleteVolume [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]
I0904 20:36:15.928288       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 on node "capz-l9y77r-md-0-qlvdg"
I0904 20:36:15.931509       1 pv_controller.go:1259] deletion of volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:36:15.931526       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Failed
I0904 20:36:15.931535       1 pv_controller.go:858] updating PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Failed
I0904 20:36:15.934040       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 2387
I0904 20:36:15.934365       1 pv_controller.go:879] volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" entered phase "Failed"
I0904 20:36:15.934547       1 pv_controller.go:901] volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
E0904 20:36:15.934592       1 goroutinemap.go:150] Operation for "delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]" failed. No retries permitted until 2022-09-04 20:36:16.4345751 +0000 UTC m=+845.522068867 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:36:15.934520       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 2387
I0904 20:36:15.934753       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Failed, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:36:15.934777       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is bound to claim azuredisk-5194/pvc-ztthq
I0904 20:36:15.934819       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: claim azuredisk-5194/pvc-ztthq not found
I0904 20:36:15.934826       1 pv_controller.go:1108] reclaimVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: policy is Delete
I0904 20:36:15.934837       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]]
I0904 20:36:15.934845       1 pv_controller.go:1765] operation "delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]" postponed due to exponential backoff
I0904 20:36:15.934532       1 pv_protection_controller.go:205] Got event on PV pvc-b317b49a-49f5-4f72-aa08-a24de3520d36
... skipping 10 lines ...
I0904 20:36:20.231880       1 azure_controller_standard.go:166] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36)
I0904 20:36:21.218153       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50436" resp=200
I0904 20:36:21.407229       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0904 20:36:22.187779       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:36:22.256148       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:36:22.256220       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 2387
I0904 20:36:22.256309       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Failed, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:36:22.256390       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is bound to claim azuredisk-5194/pvc-ztthq
I0904 20:36:22.256463       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: claim azuredisk-5194/pvc-ztthq not found
I0904 20:36:22.256552       1 pv_controller.go:1108] reclaimVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: policy is Delete
I0904 20:36:22.256632       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]]
I0904 20:36:22.256731       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36] started
I0904 20:36:22.260643       1 pv_controller.go:1340] isVolumeReleased[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is released
I0904 20:36:22.260793       1 pv_controller.go:1404] doDeleteVolume [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]
I0904 20:36:22.260866       1 pv_controller.go:1259] deletion of volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) since it's in attaching or detaching state
I0904 20:36:22.260883       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: set phase Failed
I0904 20:36:22.260892       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase Failed already set
E0904 20:36:22.260919       1 goroutinemap.go:150] Operation for "delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]" failed. No retries permitted until 2022-09-04 20:36:23.260899834 +0000 UTC m=+852.348393601 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) since it's in attaching or detaching state
I0904 20:36:22.273818       1 gc_controller.go:161] GC'ing orphaned
I0904 20:36:22.273836       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:36:22.374604       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-qlvdg ReadyCondition updated. Updating timestamp.
I0904 20:36:22.388769       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:36:22.985414       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:36:31.218783       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35390" resp=200
I0904 20:36:35.954525       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) returned with <nil>
I0904 20:36:35.954558       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36) succeeded
I0904 20:36:35.954691       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36 was detached from node:capz-l9y77r-md-0-qlvdg
I0904 20:36:35.954818       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36") on node "capz-l9y77r-md-0-qlvdg" 
I0904 20:36:37.257106       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:36:37.257162       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 2387
I0904 20:36:37.257200       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Failed, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:36:37.257234       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is bound to claim azuredisk-5194/pvc-ztthq
I0904 20:36:37.257259       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: claim azuredisk-5194/pvc-ztthq not found
I0904 20:36:37.257271       1 pv_controller.go:1108] reclaimVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: policy is Delete
I0904 20:36:37.257286       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]]
I0904 20:36:37.257317       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36] started
I0904 20:36:37.266558       1 pv_controller.go:1340] isVolumeReleased[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is released
... skipping 3 lines ...
I0904 20:36:42.274677       1 gc_controller.go:161] GC'ing orphaned
I0904 20:36:42.274702       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:36:42.482275       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36
I0904 20:36:42.482304       1 pv_controller.go:1435] volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" deleted
I0904 20:36:42.482316       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: success
I0904 20:36:42.490288       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b317b49a-49f5-4f72-aa08-a24de3520d36" with version 2426
I0904 20:36:42.490551       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: phase: Failed, bound to: "azuredisk-5194/pvc-ztthq (uid: b317b49a-49f5-4f72-aa08-a24de3520d36)", boundByController: true
I0904 20:36:42.490603       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: volume is bound to claim azuredisk-5194/pvc-ztthq
I0904 20:36:42.490624       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: claim azuredisk-5194/pvc-ztthq not found
I0904 20:36:42.490632       1 pv_controller.go:1108] reclaimVolume[pvc-b317b49a-49f5-4f72-aa08-a24de3520d36]: policy is Delete
I0904 20:36:42.490647       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b317b49a-49f5-4f72-aa08-a24de3520d36[c0846382-2036-49f1-8e10-26980ce2cfb8]]
I0904 20:36:42.490691       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b317b49a-49f5-4f72-aa08-a24de3520d36] started
I0904 20:36:42.490885       1 pv_protection_controller.go:205] Got event on PV pvc-b317b49a-49f5-4f72-aa08-a24de3520d36
... skipping 38 lines ...
I0904 20:36:47.395191       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c", timestamp:time.Time{wall:0xc0bd60f7d6ce8960, ext:876470128103, loc:(*time.Location)(0x751a1a0)}}
I0904 20:36:47.397267       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c"
I0904 20:36:47.398183       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c" (15.632014ms)
I0904 20:36:47.398846       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c", timestamp:time.Time{wall:0xc0bd60f7d6ce8960, ext:876470128103, loc:(*time.Location)(0x751a1a0)}}
I0904 20:36:47.399042       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 20:36:47.398420       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" duration="19.856745ms"
I0904 20:36:47.399278       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-6tmqr\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:36:47.399319       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" startTime="2022-09-04 20:36:47.399301257 +0000 UTC m=+876.486795024"
I0904 20:36:47.400105       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-6tmqr" timed out (false) [last progress check: 2022-09-04 20:36:47 +0000 UTC - now: 2022-09-04 20:36:47.400098763 +0000 UTC m=+876.487592530]
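The "Error syncing deployment ... the object has been modified" line above is an ordinary optimistic-concurrency conflict: the controller wrote a Deployment whose resourceVersion was already stale, and it simply resyncs on the next pass. Client code that hits the same error can re-read the object and reapply its change; a hedged client-go sketch of that generic pattern is below (the function name patchReplicas is hypothetical, and this is not what the deployment controller itself does internally).

```go
// conflictretry_sketch.go — generic "re-read and retry on conflict" pattern
// for the error class logged above; illustrative only.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// patchReplicas re-fetches the Deployment and reapplies the change whenever
// the API server answers with a resourceVersion conflict.
func patchReplicas(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err
	})
}
```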
I0904 20:36:47.404798       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-kgxxq" with version 2453
I0904 20:36:47.405000       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-kgxxq]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0904 20:36:47.405026       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-kgxxq]: no volume found
I0904 20:36:47.405035       1 pv_controller.go:1445] provisionClaim[azuredisk-1353/pvc-kgxxq]: started
... skipping 123 lines ...
I0904 20:36:51.445773       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-b6r46.1711c27e9850e044, uid 84fc807a-ef9b-4116-ae01-58c3f5bcae20, event type delete
I0904 20:36:51.448485       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-nngt5.1711c280e0e17f9c, uid 631f5f21-2de4-4de7-95eb-73382874819e, event type delete
I0904 20:36:51.450763       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-nngt5.1711c281755d0aa1, uid 213afaf7-05ca-4a0f-9e58-7fb08efb5973, event type delete
I0904 20:36:51.454224       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-ztthq.1711c27a38214b32, uid 383714c8-906b-438d-a21e-2de52cfde09e, event type delete
I0904 20:36:51.456984       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name pvc-ztthq.1711c27acfc81eff, uid 96794217-6104-41a4-af7f-af165620b51c, event type delete
I0904 20:36:51.469191       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-4q8qz, uid ede3e8fb-7d78-45fd-a3c3-6f223f169b61, event type delete
E0904 20:36:51.485014       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-cz5sw" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0904 20:36:51.511820       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid 578c775d-2db4-42e1-ac44-bc85b788de19, event type delete
I0904 20:36:51.516086       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (4.207531ms)
I0904 20:36:51.526275       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0904 20:36:51.526403       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid b403f8c2-3489-45fe-9658-92cc52635443, event type delete
I0904 20:36:51.526433       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (1.6µs)
I0904 20:36:51.567194       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2µs)
... skipping 103 lines ...
I0904 20:37:02.752336       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c" (7.563855ms)
I0904 20:37:02.752362       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c", timestamp:time.Time{wall:0xc0bd60fbab2451ed, ext:891811294224, loc:(*time.Location)(0x751a1a0)}}
I0904 20:37:02.752399       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c-td98r in state Running, deletion time 2022-09-04 20:37:32 +0000 UTC
I0904 20:37:02.752510       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c" (150.101µs)
I0904 20:37:02.752592       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" duration="9.019066ms"
I0904 20:37:02.752620       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" startTime="2022-09-04 20:37:02.752602867 +0000 UTC m=+891.840096534"
W0904 20:37:02.755846       1 reconciler.go:385] Multi-Attach error for volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7") from node "capz-l9y77r-md-0-qlvdg" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c-td98r on node capz-l9y77r-md-0-x4pd8
I0904 20:37:02.755945       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr"
I0904 20:37:02.756200       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" duration="3.584026ms"
I0904 20:37:02.756272       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" startTime="2022-09-04 20:37:02.756218594 +0000 UTC m=+891.843712361"
I0904 20:37:02.756440       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7\" Volume is already used by pod(s) azuredisk-volume-tester-6tmqr-6c8c5d5c9c-td98r"
I0904 20:37:02.756749       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-6tmqr" for a progress check after 597s
I0904 20:37:02.756821       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-6tmqr" duration="553.504µs"
I0904 20:37:02.762594       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v"
I0904 20:37:02.762810       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v, PodDisruptionBudget controller will avoid syncing.
I0904 20:37:02.762954       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v"
I0904 20:37:02.763369       1 replica_set.go:443] Pod azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v updated, objectMeta {Name:azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v GenerateName:azuredisk-volume-tester-6tmqr-6c8c5d5c9c- Namespace:azuredisk-1353 SelfLink: UID:000f7430-a49a-47e5-b993-d1bcab508f38 ResourceVersion:2531 Generation:0 CreationTimestamp:2022-09-04 20:37:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:6c8c5d5c9c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-6tmqr-6c8c5d5c9c UID:666190cf-26a1-4853-9c23-693879bfec81 Controller:0xc0027116ae BlockOwnerDeletion:0xc0027116af}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:37:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"666190cf-26a1-4853-9c23-693879bfec81\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-6tmqr-6c8c5d5c9c-srr2v GenerateName:azuredisk-volume-tester-6tmqr-6c8c5d5c9c- Namespace:azuredisk-1353 SelfLink: UID:000f7430-a49a-47e5-b993-d1bcab508f38 ResourceVersion:2539 Generation:0 CreationTimestamp:2022-09-04 20:37:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:6c8c5d5c9c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-6tmqr-6c8c5d5c9c UID:666190cf-26a1-4853-9c23-693879bfec81 Controller:0xc0022dc09e BlockOwnerDeletion:0xc0022dc09f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:37:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"666190cf-26a1-4853-9c23-693879bfec81\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 20:37:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
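The Multi-Attach warning at 20:37:02 above is expected behavior for an azure-disk volume used with a single-node access mode: the replacement pod was scheduled onto capz-l9y77r-md-0-qlvdg while the disk was still attached to capz-l9y77r-md-0-x4pd8, so the new attach has to wait for the old detach. A simplified Go sketch of that single-attach check follows; it assumes ReadWriteOnce semantics and uses hypothetical names, not the reconciler's real data structures.

```go
// multiattach_sketch.go — simplified version of the check behind the
// "Multi-Attach error" warning; illustrative only.
package main

import "fmt"

type accessMode string

const readWriteOnce accessMode = "ReadWriteOnce"

// canAttach reports whether a volume with the given access mode may be
// attached to targetNode while it is still attached to attachedNodes.
func canAttach(mode accessMode, attachedNodes []string, targetNode string) bool {
	if mode != readWriteOnce {
		return true // e.g. ReadWriteMany volumes may attach to several nodes
	}
	for _, n := range attachedNodes {
		if n != targetNode {
			return false // still attached elsewhere: Multi-Attach error
		}
	}
	return true
}

func main() {
	fmt.Println(canAttach(readWriteOnce, []string{"capz-l9y77r-md-0-x4pd8"}, "capz-l9y77r-md-0-qlvdg")) // false: must wait for detach
	fmt.Println(canAttach(readWriteOnce, nil, "capz-l9y77r-md-0-qlvdg"))                                // true: nothing attached yet
}
```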
... skipping 394 lines ...
I0904 20:38:35.129785       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:38:35.129893       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:38:35.129910       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:38:35.129917       1 pv_controller.go:1763] operation "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" is already running, skipping
I0904 20:38:35.131481       1 pv_controller.go:1340] isVolumeReleased[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is released
I0904 20:38:35.131497       1 pv_controller.go:1404] doDeleteVolume [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]
I0904 20:38:35.156501       1 pv_controller.go:1259] deletion of volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:38:35.156519       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: set phase Failed
I0904 20:38:35.156527       1 pv_controller.go:858] updating PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: set phase Failed
I0904 20:38:35.159101       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2708
I0904 20:38:35.159311       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase: Failed, bound to: "azuredisk-1353/pvc-kgxxq (uid: 8e894cbc-0071-4bee-9691-b47c081c0ab7)", boundByController: true
I0904 20:38:35.159483       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is bound to claim azuredisk-1353/pvc-kgxxq
I0904 20:38:35.159664       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:38:35.159806       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:38:35.159926       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:38:35.160082       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2708
I0904 20:38:35.160108       1 pv_controller.go:879] volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" entered phase "Failed"
I0904 20:38:35.160118       1 pv_controller.go:901] volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
E0904 20:38:35.160152       1 goroutinemap.go:150] Operation for "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" failed. No retries permitted until 2022-09-04 20:38:35.66013562 +0000 UTC m=+984.747629387 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:38:35.160091       1 pv_controller.go:1763] operation "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" is already running, skipping
I0904 20:38:35.160366       1 event.go:291] "Event occurred" object="pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted"
I0904 20:38:35.159341       1 pv_protection_controller.go:205] Got event on PV pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7
I0904 20:38:35.196167       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 on node "capz-l9y77r-md-0-qlvdg"
I0904 20:38:37.260014       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:38:37.260066       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2708
I0904 20:38:37.260102       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase: Failed, bound to: "azuredisk-1353/pvc-kgxxq (uid: 8e894cbc-0071-4bee-9691-b47c081c0ab7)", boundByController: true
I0904 20:38:37.260134       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is bound to claim azuredisk-1353/pvc-kgxxq
I0904 20:38:37.260154       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:38:37.260162       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:38:37.260178       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:38:37.260204       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7] started
I0904 20:38:37.268863       1 pv_controller.go:1340] isVolumeReleased[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is released
I0904 20:38:37.268878       1 pv_controller.go:1404] doDeleteVolume [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]
I0904 20:38:37.290350       1 pv_controller.go:1259] deletion of volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:38:37.290369       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: set phase Failed
I0904 20:38:37.290379       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase Failed already set
E0904 20:38:37.290404       1 goroutinemap.go:150] Operation for "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" failed. No retries permitted until 2022-09-04 20:38:38.29038758 +0000 UTC m=+987.377881247 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
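The two delete failures above illustrate how retries for the same volume are throttled: the first failure blocks retries for 500ms, the next for 1s, and later attempts in this log for 2s and 4s, while duplicate requests for the same operation key are either skipped as "already running" or postponed during the backoff window. Below is a minimal, self-contained Go sketch of that doubling-backoff bookkeeping; it is illustrative only, the operationMap/opState names are invented, and it is not the controller's actual goroutinemap code.

// Illustrative per-operation exponential backoff, assuming a doubling delay
// starting at 500ms (matching the 500ms/1s/2s/4s retries visible in this log).
package main

import (
	"errors"
	"fmt"
	"time"
)

type opState struct {
	running      bool
	delay        time.Duration // delay applied after the next failure
	retryAllowed time.Time     // "No retries permitted until ..."
}

type operationMap struct {
	ops map[string]*opState
}

func newOperationMap() *operationMap {
	return &operationMap{ops: make(map[string]*opState)}
}

// run executes fn for key unless it is already running or still backing off.
func (m *operationMap) run(key string, now time.Time, fn func() error) {
	st, ok := m.ops[key]
	if !ok {
		st = &opState{delay: 500 * time.Millisecond}
		m.ops[key] = st
	}
	if st.running {
		fmt.Printf("operation %q is already running, skipping\n", key)
		return
	}
	if now.Before(st.retryAllowed) {
		fmt.Printf("operation %q postponed due to exponential backoff\n", key)
		return
	}
	st.running = true
	err := fn()
	st.running = false
	if err != nil {
		st.retryAllowed = now.Add(st.delay)
		fmt.Printf("operation %q failed, no retries permitted until %s (backoff %s): %v\n",
			key, st.retryAllowed.Format(time.RFC3339Nano), st.delay, err)
		st.delay *= 2 // double the delay for the next failure
		return
	}
	// success resets the backoff
	st.delay = 500 * time.Millisecond
	st.retryAllowed = time.Time{}
}

func main() {
	m := newOperationMap()
	failingDelete := func() error { return errors.New("disk still attached") }
	start := time.Now()
	m.run("delete-pvc-example", start, failingDelete)                            // fails, retries blocked for 500ms
	m.run("delete-pvc-example", start.Add(time.Second), failingDelete)           // fails, retries blocked for 1s
	m.run("delete-pvc-example", start.Add(1100*time.Millisecond), failingDelete) // still backing off, postponed
}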
I0904 20:38:37.393068       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:40.178370       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:38:40.178399       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:38:40.234352       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:38:40.234484       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:38:40.234818       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l9y77r-md-0-qlvdg" succeeded. VolumesAttached: []
... skipping 10 lines ...
I0904 20:38:47.184163       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 79 items received
I0904 20:38:50.197163       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0904 20:38:51.218182       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60138" resp=200
I0904 20:38:52.191890       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:52.260241       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:38:52.260294       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2708
I0904 20:38:52.260333       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase: Failed, bound to: "azuredisk-1353/pvc-kgxxq (uid: 8e894cbc-0071-4bee-9691-b47c081c0ab7)", boundByController: true
I0904 20:38:52.260367       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is bound to claim azuredisk-1353/pvc-kgxxq
I0904 20:38:52.260384       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:38:52.260392       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:38:52.260405       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:38:52.260434       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7] started
I0904 20:38:52.265787       1 pv_controller.go:1340] isVolumeReleased[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is released
I0904 20:38:52.265804       1 pv_controller.go:1404] doDeleteVolume [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]
I0904 20:38:52.265835       1 pv_controller.go:1259] deletion of volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) since it's in attaching or detaching state
I0904 20:38:52.265856       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: set phase Failed
I0904 20:38:52.265865       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase Failed already set
E0904 20:38:52.265889       1 goroutinemap.go:150] Operation for "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" failed. No retries permitted until 2022-09-04 20:38:54.265872091 +0000 UTC m=+1003.353365758 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) since it's in attaching or detaching state
I0904 20:38:52.394010       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:38:53.073424       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:38:54.196996       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 10 items received
I0904 20:38:55.860275       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) returned with <nil>
I0904 20:38:55.860310       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7) succeeded
I0904 20:38:55.860320       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7 was detached from node:capz-l9y77r-md-0-qlvdg
... skipping 6 lines ...
I0904 20:39:02.255188       1 controller.go:731] It took 1.62e-05 seconds to finish nodeSyncInternal
I0904 20:39:02.279340       1 gc_controller.go:161] GC'ing orphaned
I0904 20:39:02.279358       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:39:04.222914       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 45 items received
I0904 20:39:07.260878       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:39:07.261057       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2708
I0904 20:39:07.261148       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase: Failed, bound to: "azuredisk-1353/pvc-kgxxq (uid: 8e894cbc-0071-4bee-9691-b47c081c0ab7)", boundByController: true
I0904 20:39:07.261211       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is bound to claim azuredisk-1353/pvc-kgxxq
I0904 20:39:07.261234       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:39:07.261242       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:39:07.261258       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:39:07.261293       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7] started
I0904 20:39:07.268373       1 pv_controller.go:1340] isVolumeReleased[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is released
I0904 20:39:07.268390       1 pv_controller.go:1404] doDeleteVolume [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]
I0904 20:39:07.394941       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:39:11.218573       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="58.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38828" resp=200
I0904 20:39:12.480585       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7
I0904 20:39:12.480614       1 pv_controller.go:1435] volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" deleted
I0904 20:39:12.480625       1 pv_controller.go:1283] deleteVolumeOperation [pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: success
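This is the turning point for pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7: while the disk was attached to the node the delete was refused ("already attached ... could not be deleted"), while the detach was in flight it was refused again ("in attaching or detaching state"), and only after the detach completed did doDeleteVolume remove the managed disk and report success. A rough Go sketch of that ordering guard follows; diskState and deleteDisk are invented for illustration and do not reproduce the real Azure cloud-provider API.

package main

import "fmt"

// diskState is an invented stand-in for the provider's view of a managed disk.
type diskState struct {
	name       string
	attachedTo string // node the disk is attached to; empty when detached
	transition bool   // an attach or detach operation is still in flight
}

// deleteDisk refuses to remove the disk until it is fully detached, mirroring
// the two failure messages seen in this log.
func deleteDisk(d *diskState) error {
	switch {
	case d.transition:
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", d.name)
	case d.attachedTo != "":
		return fmt.Errorf("disk(%s) already attached to node(%s), could not be deleted", d.name, d.attachedTo)
	default:
		return nil // detached: the managed disk can be deleted
	}
}

func main() {
	d := &diskState{name: "pvc-example", attachedTo: "md-0-example"}
	fmt.Println(deleteDisk(d)) // attached -> delete refused

	d.attachedTo, d.transition = "", true
	fmt.Println(deleteDisk(d)) // detach in progress -> delete refused

	d.transition = false
	fmt.Println(deleteDisk(d)) // detached -> nil, delete proceeds
}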
I0904 20:39:12.488331       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7" with version 2763
I0904 20:39:12.488613       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: phase: Failed, bound to: "azuredisk-1353/pvc-kgxxq (uid: 8e894cbc-0071-4bee-9691-b47c081c0ab7)", boundByController: true
I0904 20:39:12.488811       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: volume is bound to claim azuredisk-1353/pvc-kgxxq
I0904 20:39:12.489117       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: claim azuredisk-1353/pvc-kgxxq not found
I0904 20:39:12.489228       1 pv_controller.go:1108] reclaimVolume[pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7]: policy is Delete
I0904 20:39:12.489334       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]]
I0904 20:39:12.489418       1 pv_controller.go:1763] operation "delete-pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7[39a207ef-9107-4bbf-91b6-087672e38082]" is already running, skipping
I0904 20:39:12.488502       1 pv_protection_controller.go:205] Got event on PV pvc-8e894cbc-0071-4bee-9691-b47c081c0ab7
... skipping 289 lines ...
I0904 20:39:35.627638       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-w62mw" with version 2881
I0904 20:39:35.627931       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457 StorageAccountType:StandardSSD_LRS Size:10
I0904 20:39:35.628092       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-59/pvc-w62mw"
I0904 20:39:35.629926       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7 StorageAccountType:StandardSSD_LRS Size:10
I0904 20:39:36.358897       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4538
I0904 20:39:36.404364       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4538, name default-token-ms6cz, uid 70168ca7-4d37-4a81-b4c7-b3d0e4d08683, event type delete
E0904 20:39:36.416181       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4538/default: secrets "default-token-g42lh" is forbidden: unable to create new content in namespace azuredisk-4538 because it is being terminated
I0904 20:39:36.428024       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4538/default), service account deleted, removing tokens
I0904 20:39:36.428112       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4538, name default, uid 578e6888-c67a-4ee8-8f36-c5716632d63c, event type delete
I0904 20:39:36.428144       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4538" (1.9µs)
I0904 20:39:36.433191       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4538, name pvc-47m2j.1711c2d55dc87cff, uid 120f216c-b1a0-4999-8a63-688e53e58821, event type delete
I0904 20:39:36.446209       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4538, name kube-root-ca.crt, uid e51b2c1e-961f-4543-ae77-aa31ad2d5368, event type delete
I0904 20:39:36.450325       1 publisher.go:186] Finished syncing namespace "azuredisk-4538" (3.928129ms)
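The tokens_controller error above is expected during namespace teardown: once azuredisk-4538 is terminating, the API server rejects creation of a replacement token secret in that namespace. A tiny illustrative sketch of that admission-style check follows; the namespaceInfo type and admitCreate function are invented for the example and are not the API server's implementation.

package main

import "fmt"

// namespaceInfo is an invented type for the example; the real check lives in
// the API server's namespace lifecycle admission, not in this sketch.
type namespaceInfo struct {
	name        string
	terminating bool
}

// admitCreate rejects creation of new objects in a namespace that is being
// deleted, which is what produces the "is forbidden: unable to create new
// content in namespace ... because it is being terminated" errors in this log.
func admitCreate(ns namespaceInfo, resource, name string) error {
	if ns.terminating {
		return fmt.Errorf("%s %q is forbidden: unable to create new content in namespace %s because it is being terminated",
			resource, name, ns.name)
	}
	return nil
}

func main() {
	ns := namespaceInfo{name: "azuredisk-4538", terminating: true}
	fmt.Println(admitCreate(ns, "secrets", "default-token-example"))
}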
... skipping 19 lines ...
I0904 20:39:37.262645       1 pv_controller.go:1445] provisionClaim[azuredisk-59/pvc-w62mw]: started
I0904 20:39:37.262654       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-59/pvc-w62mw[6b1920e7-40e5-4080-a018-83519d0670a7]]
I0904 20:39:37.262661       1 pv_controller.go:1763] operation "provision-azuredisk-59/pvc-w62mw[6b1920e7-40e5-4080-a018-83519d0670a7]" is already running, skipping
I0904 20:39:37.377596       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8266
I0904 20:39:37.396142       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:39:37.410981       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-f6h2g, uid 15180907-29f7-48f6-8773-9c68995866a5, event type delete
E0904 20:39:37.421632       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-mlwv5" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0904 20:39:37.444417       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8266, name kube-root-ca.crt, uid 33f21aa6-d3e5-4195-be44-66c7ae419dba, event type delete
I0904 20:39:37.447776       1 publisher.go:186] Finished syncing namespace "azuredisk-8266" (3.303024ms)
I0904 20:39:37.486997       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0904 20:39:37.487043       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid dfaf3c11-8ddf-4d42-9894-e4a0dfabfa87, event type delete
I0904 20:39:37.487073       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.4µs)
I0904 20:39:37.507146       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8266, estimate: 0, errors: <nil>
... skipping 182 lines ...
I0904 20:39:38.924000       1 pv_controller.go:1039] volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-d72k7 (uid: 43fd8839-7514-4832-bbba-a16f1d822457)", boundByController: true
I0904 20:39:38.924017       1 pv_controller.go:1040] claim "azuredisk-59/pvc-d72k7" status after binding: phase: Bound, bound to: "pvc-43fd8839-7514-4832-bbba-a16f1d822457", bindCompleted: true, boundByController: true
I0904 20:39:39.364083       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7996
I0904 20:39:39.416403       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid 5b6abf26-c11a-4f2f-a85c-da6233711fc0, event type delete
I0904 20:39:39.419754       1 publisher.go:186] Finished syncing namespace "azuredisk-7996" (3.080022ms)
I0904 20:39:39.475960       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-fcb8d, uid 8aeaa5a8-99d0-4aa3-a510-8fbbabddab1c, event type delete
E0904 20:39:39.489980       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-ppccx" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0904 20:39:39.499748       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0904 20:39:39.499937       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid a6aacfc1-542c-42c1-a78d-b585aeae4f30, event type delete
I0904 20:39:39.500050       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (1.8µs)
I0904 20:39:39.505269       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7996, estimate: 0, errors: <nil>
I0904 20:39:39.505589       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (1.5µs)
I0904 20:39:39.511716       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7996" (150.542898ms)
... skipping 303 lines ...
I0904 20:40:09.024207       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:40:09.024238       1 pv_controller.go:1763] operation "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" is already running, skipping
I0904 20:40:09.023718       1 pv_protection_controller.go:205] Got event on PV pvc-6b1920e7-40e5-4080-a018-83519d0670a7
I0904 20:40:09.023954       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6b1920e7-40e5-4080-a018-83519d0670a7] started
I0904 20:40:09.027239       1 pv_controller.go:1340] isVolumeReleased[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is released
I0904 20:40:09.027261       1 pv_controller.go:1404] doDeleteVolume [pvc-6b1920e7-40e5-4080-a018-83519d0670a7]
I0904 20:40:09.066249       1 pv_controller.go:1259] deletion of volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:09.066272       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: set phase Failed
I0904 20:40:09.066280       1 pv_controller.go:858] updating PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: set phase Failed
I0904 20:40:09.069458       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:40:09.069486       1 pv_controller.go:879] volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" entered phase "Failed"
I0904 20:40:09.069494       1 pv_controller.go:901] volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:09.069741       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:40:09.069920       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:40:09.069962       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:40:09.070052       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:40:09.070066       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:40:09.070095       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:40:09.069749       1 pv_protection_controller.go:205] Got event on PV pvc-6b1920e7-40e5-4080-a018-83519d0670a7
E0904 20:40:09.069805       1 goroutinemap.go:150] Operation for "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" failed. No retries permitted until 2022-09-04 20:40:09.569779339 +0000 UTC m=+1078.657273106 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:09.070164       1 pv_controller.go:1765] operation "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" postponed due to exponential backoff
I0904 20:40:09.070235       1 event.go:291] "Event occurred" object="pvc-6b1920e7-40e5-4080-a018-83519d0670a7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted"
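Note that phase "Failed" here is not terminal: the controller records a Warning VolumeFailedDelete event, keeps the PV object, and retries the delete on subsequent resyncs until the disk is detached, as happens later in this log. A simplified sketch of that phase/event bookkeeping is shown below, using invented types rather than the Kubernetes API objects.

package main

import "fmt"

// persistentVolume and event are simplified, invented types for illustration.
type persistentVolume struct {
	name    string
	phase   string
	message string
}

type event struct {
	objectName, kind, eventType, reason, message string
}

// markDeleteFailed sets the volume's phase to Failed (only once) and records a
// Warning event for every failed attempt; the object itself is kept so the
// next resync can retry the delete.
func markDeleteFailed(pv *persistentVolume, events *[]event, err error) {
	if pv.phase != "Failed" {
		pv.phase = "Failed"
		pv.message = err.Error()
		fmt.Printf("volume %q entered phase \"Failed\"\n", pv.name)
	} else {
		fmt.Printf("volume %q: phase Failed already set\n", pv.name)
	}
	*events = append(*events, event{
		objectName: pv.name,
		kind:       "PersistentVolume",
		eventType:  "Warning",
		reason:     "VolumeFailedDelete",
		message:    err.Error(),
	})
}

func main() {
	pv := &persistentVolume{name: "pvc-example", phase: "Released"}
	var events []event
	markDeleteFailed(pv, &events, fmt.Errorf("disk already attached to node, could not be deleted"))
	markDeleteFailed(pv, &events, fmt.Errorf("disk is in attaching or detaching state"))
	fmt.Printf("phase=%s, %d VolumeFailedDelete event(s) recorded\n", pv.phase, len(events))
}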
I0904 20:40:10.246428       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-qlvdg"
I0904 20:40:10.246484       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:40:10.246494       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6 to the node "capz-l9y77r-md-0-qlvdg" mounted false
I0904 20:40:10.246502       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457 to the node "capz-l9y77r-md-0-qlvdg" mounted false
... skipping 67 lines ...
I0904 20:40:22.265827       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: volume is bound to claim azuredisk-59/pvc-gbcnb
I0904 20:40:22.265911       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: claim azuredisk-59/pvc-gbcnb found: phase: Bound, bound to: "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6", bindCompleted: true, boundByController: true
I0904 20:40:22.266005       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: all is bound
I0904 20:40:22.266089       1 pv_controller.go:858] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: set phase Bound
I0904 20:40:22.266121       1 pv_controller.go:861] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: phase Bound already set
I0904 20:40:22.266157       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:40:22.266179       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:40:22.266215       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:40:22.266237       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:40:22.266262       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:40:22.266281       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:40:22.266300       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" with version 2918
I0904 20:40:22.266317       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-43fd8839-7514-4832-bbba-a16f1d822457]: phase: Bound, bound to: "azuredisk-59/pvc-d72k7 (uid: 43fd8839-7514-4832-bbba-a16f1d822457)", boundByController: true
... skipping 4 lines ...
I0904 20:40:22.266390       1 pv_controller.go:861] updating PersistentVolume[pvc-43fd8839-7514-4832-bbba-a16f1d822457]: phase Bound already set
I0904 20:40:22.266422       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6b1920e7-40e5-4080-a018-83519d0670a7] started
I0904 20:40:22.268865       1 pv_controller.go:1340] isVolumeReleased[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is released
I0904 20:40:22.268882       1 pv_controller.go:1404] doDeleteVolume [pvc-6b1920e7-40e5-4080-a018-83519d0670a7]
I0904 20:40:22.282217       1 gc_controller.go:161] GC'ing orphaned
I0904 20:40:22.282235       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:40:22.331458       1 pv_controller.go:1259] deletion of volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:22.331478       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: set phase Failed
I0904 20:40:22.331488       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase Failed already set
E0904 20:40:22.331533       1 goroutinemap.go:150] Operation for "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" failed. No retries permitted until 2022-09-04 20:40:23.331496205 +0000 UTC m=+1092.418989972 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:22.398349       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:40:23.115293       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:40:25.881906       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6) returned with <nil>
I0904 20:40:25.881945       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6) succeeded
I0904 20:40:25.881955       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6 was detached from node:capz-l9y77r-md-0-qlvdg
I0904 20:40:25.882000       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6") on node "capz-l9y77r-md-0-qlvdg" 
I0904 20:40:25.912827       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457"
I0904 20:40:25.912851       1 azure_controller_standard.go:166] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457)
I0904 20:40:31.218608       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="54.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55698" resp=200
I0904 20:40:37.264953       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:40:37.265066       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:40:37.265126       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:40:37.265200       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:40:37.265226       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:40:37.265238       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:40:37.265252       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:40:37.265297       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" with version 2918
I0904 20:40:37.265331       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-43fd8839-7514-4832-bbba-a16f1d822457]: phase: Bound, bound to: "azuredisk-59/pvc-d72k7 (uid: 43fd8839-7514-4832-bbba-a16f1d822457)", boundByController: true
... skipping 41 lines ...
I0904 20:40:37.268296       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-gbcnb] status: phase Bound already set
I0904 20:40:37.268370       1 pv_controller.go:1038] volume "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6" bound to claim "azuredisk-59/pvc-gbcnb"
I0904 20:40:37.268456       1 pv_controller.go:1039] volume "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-gbcnb (uid: 46381947-f1dd-4bfd-baa9-b1620a9c11f6)", boundByController: true
I0904 20:40:37.268534       1 pv_controller.go:1040] claim "azuredisk-59/pvc-gbcnb" status after binding: phase: Bound, bound to: "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6", bindCompleted: true, boundByController: true
I0904 20:40:37.275006       1 pv_controller.go:1340] isVolumeReleased[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is released
I0904 20:40:37.275023       1 pv_controller.go:1404] doDeleteVolume [pvc-6b1920e7-40e5-4080-a018-83519d0670a7]
I0904 20:40:37.295117       1 pv_controller.go:1259] deletion of volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:37.295135       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: set phase Failed
I0904 20:40:37.295145       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase Failed already set
E0904 20:40:37.295171       1 goroutinemap.go:150] Operation for "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" failed. No retries permitted until 2022-09-04 20:40:39.295152999 +0000 UTC m=+1108.382646666 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:40:37.399213       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:40:40.898338       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0904 20:40:41.218612       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="157.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34676" resp=200
I0904 20:40:41.425328       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457) returned with <nil>
I0904 20:40:41.425565       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457) succeeded
I0904 20:40:41.425688       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-43fd8839-7514-4832-bbba-a16f1d822457 was detached from node:capz-l9y77r-md-0-qlvdg
... skipping 27 lines ...
I0904 20:40:52.266288       1 pv_controller.go:858] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: set phase Bound
I0904 20:40:52.266292       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-59/pvc-d72k7]: binding to "pvc-43fd8839-7514-4832-bbba-a16f1d822457"
I0904 20:40:52.266296       1 pv_controller.go:861] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: phase Bound already set
I0904 20:40:52.266309       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:40:52.266312       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-59/pvc-d72k7]: already bound to "pvc-43fd8839-7514-4832-bbba-a16f1d822457"
I0904 20:40:52.266321       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-d72k7] status: set phase Bound
I0904 20:40:52.266328       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:40:52.266343       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-d72k7] status: phase Bound already set
I0904 20:40:52.266343       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:40:52.266354       1 pv_controller.go:1038] volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" bound to claim "azuredisk-59/pvc-d72k7"
I0904 20:40:52.266364       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:40:52.266370       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:40:52.266371       1 pv_controller.go:1039] volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-d72k7 (uid: 43fd8839-7514-4832-bbba-a16f1d822457)", boundByController: true
... skipping 22 lines ...
I0904 20:40:52.266545       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-gbcnb] status: phase Bound already set
I0904 20:40:52.266556       1 pv_controller.go:1038] volume "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6" bound to claim "azuredisk-59/pvc-gbcnb"
I0904 20:40:52.266571       1 pv_controller.go:1039] volume "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-gbcnb (uid: 46381947-f1dd-4bfd-baa9-b1620a9c11f6)", boundByController: true
I0904 20:40:52.266584       1 pv_controller.go:1040] claim "azuredisk-59/pvc-gbcnb" status after binding: phase: Bound, bound to: "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6", bindCompleted: true, boundByController: true
I0904 20:40:52.270618       1 pv_controller.go:1340] isVolumeReleased[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is released
I0904 20:40:52.270634       1 pv_controller.go:1404] doDeleteVolume [pvc-6b1920e7-40e5-4080-a018-83519d0670a7]
I0904 20:40:52.270681       1 pv_controller.go:1259] deletion of volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) since it's in attaching or detaching state
I0904 20:40:52.270794       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: set phase Failed
I0904 20:40:52.270808       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase Failed already set
E0904 20:40:52.270846       1 goroutinemap.go:150] Operation for "delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]" failed. No retries permitted until 2022-09-04 20:40:56.270815661 +0000 UTC m=+1125.358309328 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) since it's in attaching or detaching state
I0904 20:40:52.399479       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:40:53.131060       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:40:57.011075       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) returned with <nil>
I0904 20:40:57.011110       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7) succeeded
I0904 20:40:57.011119       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7 was detached from node:capz-l9y77r-md-0-qlvdg
I0904 20:40:57.011267       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7") on node "capz-l9y77r-md-0-qlvdg" 
... skipping 26 lines ...
I0904 20:41:07.267654       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: claim azuredisk-59/pvc-gbcnb found: phase: Bound, bound to: "pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6", bindCompleted: true, boundByController: true
I0904 20:41:07.267859       1 pv_controller.go:1040] claim "azuredisk-59/pvc-d72k7" status after binding: phase: Bound, bound to: "pvc-43fd8839-7514-4832-bbba-a16f1d822457", bindCompleted: true, boundByController: true
I0904 20:41:07.267914       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: all is bound
I0904 20:41:07.268022       1 pv_controller.go:858] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: set phase Bound
I0904 20:41:07.268090       1 pv_controller.go:861] updating PersistentVolume[pvc-46381947-f1dd-4bfd-baa9-b1620a9c11f6]: phase Bound already set
I0904 20:41:07.268165       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 2997
I0904 20:41:07.268261       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:41:07.268339       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:41:07.268434       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:41:07.268534       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:41:07.268638       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:41:07.268742       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-43fd8839-7514-4832-bbba-a16f1d822457" with version 2918
I0904 20:41:07.268846       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-43fd8839-7514-4832-bbba-a16f1d822457]: phase: Bound, bound to: "azuredisk-59/pvc-d72k7 (uid: 43fd8839-7514-4832-bbba-a16f1d822457)", boundByController: true
... skipping 26 lines ...
I0904 20:41:11.218430       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="52.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58358" resp=200
I0904 20:41:12.190317       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0904 20:41:12.459977       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-6b1920e7-40e5-4080-a018-83519d0670a7
I0904 20:41:12.460066       1 pv_controller.go:1435] volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" deleted
I0904 20:41:12.460107       1 pv_controller.go:1283] deleteVolumeOperation [pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: success
I0904 20:41:12.465051       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6b1920e7-40e5-4080-a018-83519d0670a7" with version 3092
I0904 20:41:12.465236       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: phase: Failed, bound to: "azuredisk-59/pvc-w62mw (uid: 6b1920e7-40e5-4080-a018-83519d0670a7)", boundByController: true
I0904 20:41:12.465333       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: volume is bound to claim azuredisk-59/pvc-w62mw
I0904 20:41:12.465407       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: claim azuredisk-59/pvc-w62mw not found
I0904 20:41:12.465437       1 pv_controller.go:1108] reclaimVolume[pvc-6b1920e7-40e5-4080-a018-83519d0670a7]: policy is Delete
I0904 20:41:12.465491       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6b1920e7-40e5-4080-a018-83519d0670a7[007e0a54-cfc1-431d-bdb9-3391b5e240a1]]
I0904 20:41:12.465246       1 pv_protection_controller.go:205] Got event on PV pvc-6b1920e7-40e5-4080-a018-83519d0670a7
I0904 20:41:12.465600       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6b1920e7-40e5-4080-a018-83519d0670a7] started
... skipping 549 lines ...
I0904 20:41:56.091707       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: claim azuredisk-2546/pvc-cfjfq not found
I0904 20:41:56.091717       1 pv_controller.go:1108] reclaimVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: policy is Delete
I0904 20:41:56.091737       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]]
I0904 20:41:56.091745       1 pv_controller.go:1763] operation "delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]" is already running, skipping
I0904 20:41:56.093148       1 pv_controller.go:1340] isVolumeReleased[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is released
I0904 20:41:56.093305       1 pv_controller.go:1404] doDeleteVolume [pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]
I0904 20:41:56.127671       1 pv_controller.go:1259] deletion of volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:41:56.127690       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: set phase Failed
I0904 20:41:56.127698       1 pv_controller.go:858] updating PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: set phase Failed
I0904 20:41:56.130762       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" with version 3240
I0904 20:41:56.130787       1 pv_controller.go:879] volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" entered phase "Failed"
I0904 20:41:56.130796       1 pv_controller.go:901] volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
E0904 20:41:56.131168       1 goroutinemap.go:150] Operation for "delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]" failed. No retries permitted until 2022-09-04 20:41:56.630811432 +0000 UTC m=+1185.718305199 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-qlvdg), could not be deleted
I0904 20:41:56.131461       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" with version 3240
I0904 20:41:56.131489       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: phase: Failed, bound to: "azuredisk-2546/pvc-cfjfq (uid: c47edc0b-fd52-47a8-bbc1-87a81f3658d6)", boundByController: true
I0904 20:41:56.131513       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is bound to claim azuredisk-2546/pvc-cfjfq
I0904 20:41:56.131657       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: claim azuredisk-2546/pvc-cfjfq not found
I0904 20:41:56.131668       1 pv_controller.go:1108] reclaimVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: policy is Delete
I0904 20:41:56.131680       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]]
I0904 20:41:56.131688       1 pv_controller.go:1765] operation "delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]" postponed due to exponential backoff
I0904 20:41:56.131578       1 pv_protection_controller.go:205] Got event on PV pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6
... skipping 40 lines ...
I0904 20:42:07.270729       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: volume is bound to claim azuredisk-2546/pvc-5qlkj
I0904 20:42:07.270763       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: claim azuredisk-2546/pvc-5qlkj found: phase: Bound, bound to: "pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5", bindCompleted: true, boundByController: true
I0904 20:42:07.270778       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: all is bound
I0904 20:42:07.270812       1 pv_controller.go:858] updating PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: set phase Bound
I0904 20:42:07.270824       1 pv_controller.go:861] updating PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: phase Bound already set
I0904 20:42:07.270855       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" with version 3240
I0904 20:42:07.270915       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: phase: Failed, bound to: "azuredisk-2546/pvc-cfjfq (uid: c47edc0b-fd52-47a8-bbc1-87a81f3658d6)", boundByController: true
I0904 20:42:07.270996       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is bound to claim azuredisk-2546/pvc-cfjfq
I0904 20:42:07.271076       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: claim azuredisk-2546/pvc-cfjfq not found
I0904 20:42:07.271094       1 pv_controller.go:1108] reclaimVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: policy is Delete
I0904 20:42:07.271108       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]]
I0904 20:42:07.271134       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6] started
I0904 20:42:07.270621       1 pv_controller.go:1038] volume "pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5" bound to claim "azuredisk-2546/pvc-5qlkj"
I0904 20:42:07.271329       1 pv_controller.go:1039] volume "pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-5qlkj (uid: de54a31f-3c4f-4553-b269-e974f1ad95a5)", boundByController: true
I0904 20:42:07.271419       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-5qlkj" status after binding: phase: Bound, bound to: "pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5", bindCompleted: true, boundByController: true
I0904 20:42:07.279298       1 pv_controller.go:1340] isVolumeReleased[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is released
I0904 20:42:07.279313       1 pv_controller.go:1404] doDeleteVolume [pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]
I0904 20:42:07.279345       1 pv_controller.go:1259] deletion of volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) since it's in attaching or detaching state
I0904 20:42:07.279358       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: set phase Failed
I0904 20:42:07.279368       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: phase Failed already set
E0904 20:42:07.279393       1 goroutinemap.go:150] Operation for "delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]" failed. No retries permitted until 2022-09-04 20:42:08.279375495 +0000 UTC m=+1197.366869462 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) since it's in attaching or detaching state
I0904 20:42:07.402541       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:42:11.218282       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="54.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55138" resp=200
I0904 20:42:15.983462       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-qlvdg) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) returned with <nil>
I0904 20:42:15.983500       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6) succeeded
I0904 20:42:15.983509       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6 was detached from node:capz-l9y77r-md-0-qlvdg
I0904 20:42:15.983555       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6") on node "capz-l9y77r-md-0-qlvdg" 
... skipping 14 lines ...
I0904 20:42:22.271083       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: volume is bound to claim azuredisk-2546/pvc-5qlkj
I0904 20:42:22.271115       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: claim azuredisk-2546/pvc-5qlkj found: phase: Bound, bound to: "pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5", bindCompleted: true, boundByController: true
I0904 20:42:22.271134       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: all is bound
I0904 20:42:22.271144       1 pv_controller.go:858] updating PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: set phase Bound
I0904 20:42:22.271155       1 pv_controller.go:861] updating PersistentVolume[pvc-de54a31f-3c4f-4553-b269-e974f1ad95a5]: phase Bound already set
I0904 20:42:22.271187       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" with version 3240
I0904 20:42:22.271208       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: phase: Failed, bound to: "azuredisk-2546/pvc-cfjfq (uid: c47edc0b-fd52-47a8-bbc1-87a81f3658d6)", boundByController: true
I0904 20:42:22.271226       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is bound to claim azuredisk-2546/pvc-cfjfq
I0904 20:42:22.271256       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: claim azuredisk-2546/pvc-cfjfq not found
I0904 20:42:22.271268       1 pv_controller.go:1108] reclaimVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: policy is Delete
I0904 20:42:22.271283       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]]
I0904 20:42:22.271309       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6] started
I0904 20:42:22.271397       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-5qlkj" with version 3165
... skipping 20 lines ...
I0904 20:42:22.403101       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:42:23.183013       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:42:27.541501       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6
I0904 20:42:27.541531       1 pv_controller.go:1435] volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" deleted
I0904 20:42:27.541543       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: success
I0904 20:42:27.555227       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6" with version 3288
I0904 20:42:27.555268       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: phase: Failed, bound to: "azuredisk-2546/pvc-cfjfq (uid: c47edc0b-fd52-47a8-bbc1-87a81f3658d6)", boundByController: true
I0904 20:42:27.555294       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: volume is bound to claim azuredisk-2546/pvc-cfjfq
I0904 20:42:27.555316       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: claim azuredisk-2546/pvc-cfjfq not found
I0904 20:42:27.555325       1 pv_controller.go:1108] reclaimVolume[pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6]: policy is Delete
I0904 20:42:27.555339       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]]
I0904 20:42:27.555345       1 pv_controller.go:1763] operation "delete-pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6[23db93e2-3fa8-47b1-bdf7-ab8f5c03c59a]" is already running, skipping
I0904 20:42:27.555360       1 pv_protection_controller.go:205] Got event on PV pvc-c47edc0b-fd52-47a8-bbc1-87a81f3658d6
... skipping 186 lines ...
I0904 20:42:47.371038       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name pvc-5qlkj.1711c2f61502ac63, uid 4b7e8859-eb65-410d-88bd-f7a2094ca17a, event type delete
I0904 20:42:47.373891       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name pvc-cfjfq.1711c2f57ce4ccb2, uid b9dd7c07-2d01-4f18-8554-646c0e388db4, event type delete
I0904 20:42:47.376364       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name pvc-cfjfq.1711c2f6161bef41, uid 26fb2ea8-b69b-4ea6-9cbb-f7a34123c861, event type delete
I0904 20:42:47.383581       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2546, name kube-root-ca.crt, uid 6831fae4-4a8f-467b-bb0d-eaebc6388900, event type delete
I0904 20:42:47.388182       1 publisher.go:186] Finished syncing namespace "azuredisk-2546" (4.562334ms)
I0904 20:42:47.462684       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2546, name default-token-9wxp9, uid 750fd3de-6286-4cd8-b3fd-0c6146363de5, event type delete
E0904 20:42:47.473548       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2546/default: secrets "default-token-mgltb" is forbidden: unable to create new content in namespace azuredisk-2546 because it is being terminated
I0904 20:42:47.482594       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2546/default), service account deleted, removing tokens
I0904 20:42:47.482660       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2546, name default, uid 78f8004a-b520-4986-b0a3-dc6d4d05ef32, event type delete
I0904 20:42:47.482699       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2546" (2.1µs)
I0904 20:42:47.488113       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2546" (1.5µs)
I0904 20:42:47.489099       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2546, estimate: 0, errors: <nil>
I0904 20:42:47.499196       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2546" (183.805043ms)
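
The tokens_controller error a few lines up ("unable to create new content in namespace azuredisk-2546 because it is being terminated") is an expected race during teardown: once a namespace is terminating, the API server rejects creation of new objects inside it, and the namespace controller finishes deleting content regardless. A hedged client-go sketch of detecting that state before attempting a create is below; the ./kubeconfig path and the hard-coded namespace name are placeholders.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// namespaceIsTerminating reports whether creating new objects in the named
// namespace would be rejected the way the tokens_controller call above was.
func namespaceIsTerminating(cs kubernetes.Interface, name string) (bool, error) {
	ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	return ns.Status.Phase == corev1.NamespaceTerminating || ns.DeletionTimestamp != nil, nil
}

func main() {
	// Placeholder kubeconfig path; the namespace name is taken from the log for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	terminating, err := namespaceIsTerminating(cs, "azuredisk-2546")
	fmt.Println(terminating, err)
}
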
... skipping 488 lines ...
I0904 20:43:16.824216       1 pv_controller.go:1108] reclaimVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: policy is Delete
I0904 20:43:16.824328       1 pv_controller.go:1752] scheduleOperation[delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]]
I0904 20:43:16.824978       1 pv_controller.go:1763] operation "delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]" is already running, skipping
I0904 20:43:16.827184       1 pv_protection_controller.go:205] Got event on PV pvc-633fd388-d680-4db1-b72e-87378148b0aa
I0904 20:43:16.827446       1 pv_controller.go:1340] isVolumeReleased[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is released
I0904 20:43:16.827463       1 pv_controller.go:1404] doDeleteVolume [pvc-633fd388-d680-4db1-b72e-87378148b0aa]
I0904 20:43:16.848361       1 pv_controller.go:1259] deletion of volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:43:16.848382       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: set phase Failed
I0904 20:43:16.848391       1 pv_controller.go:858] updating PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: set phase Failed
I0904 20:43:16.851792       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" with version 3472
I0904 20:43:16.852137       1 pv_controller.go:879] volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" entered phase "Failed"
I0904 20:43:16.852284       1 pv_controller.go:901] volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:43:16.851835       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" with version 3472
I0904 20:43:16.852428       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase: Failed, bound to: "azuredisk-8582/pvc-fwxnk (uid: 633fd388-d680-4db1-b72e-87378148b0aa)", boundByController: true
I0904 20:43:16.852456       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is bound to claim azuredisk-8582/pvc-fwxnk
I0904 20:43:16.852494       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: claim azuredisk-8582/pvc-fwxnk not found
I0904 20:43:16.852503       1 pv_controller.go:1108] reclaimVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: policy is Delete
I0904 20:43:16.852518       1 pv_controller.go:1752] scheduleOperation[delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]]
I0904 20:43:16.851848       1 pv_protection_controller.go:205] Got event on PV pvc-633fd388-d680-4db1-b72e-87378148b0aa
E0904 20:43:16.852553       1 goroutinemap.go:150] Operation for "delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]" failed. No retries permitted until 2022-09-04 20:43:17.352317751 +0000 UTC m=+1266.439811518 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:43:16.852564       1 pv_controller.go:1765] operation "delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]" postponed due to exponential backoff
I0904 20:43:16.852625       1 event.go:291] "Event occurred" object="pvc-633fd388-d680-4db1-b72e-87378148b0aa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted"
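
The "Event occurred ... reason=VolumeFailedDelete" line above is the controller surfacing the failed delete as a Warning event on the PersistentVolume object (the kind of event kubectl describe pv would show). The sketch below shows one way such a Warning can be emitted with client-go's record package; the component name, the PV reference, and the message wording are illustrative, and this is not the controller's actual code path.

package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	typedcorev1 "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/record"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Wire an event recorder in the same general way controllers do.
	broadcaster := record.NewBroadcaster()
	broadcaster.StartRecordingToSink(&typedcorev1.EventSinkImpl{Interface: cs.CoreV1().Events("")})
	recorder := broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "pv-annotation-sketch"})

	// Illustrative PV reference and message text, not the controller's exact wording.
	pv := &corev1.PersistentVolume{ObjectMeta: metav1.ObjectMeta{Name: "pvc-633fd388-d680-4db1-b72e-87378148b0aa"}}
	recorder.Eventf(pv, corev1.EventTypeWarning, "VolumeFailedDelete",
		"disk still attached to node %s, delete will be retried", "capz-l9y77r-md-0-x4pd8")

	// Events are recorded asynchronously; give the broadcaster a moment to flush.
	time.Sleep(time.Second)
	broadcaster.Shutdown()
}
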
I0904 20:43:17.863632       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-md-0-x4pd8"
I0904 20:43:17.863662       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18 to the node "capz-l9y77r-md-0-x4pd8" mounted false
I0904 20:43:17.863671       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa to the node "capz-l9y77r-md-0-x4pd8" mounted false
I0904 20:43:17.863694       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe to the node "capz-l9y77r-md-0-x4pd8" mounted false
... skipping 48 lines ...
I0904 20:43:22.274313       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-4h26g] status: set phase Bound
I0904 20:43:22.274351       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-4h26g] status: phase Bound already set
I0904 20:43:22.274368       1 pv_controller.go:1038] volume "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18" bound to claim "azuredisk-8582/pvc-4h26g"
I0904 20:43:22.274387       1 pv_controller.go:1039] volume "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-4h26g (uid: e0a430dc-7b5b-440d-a7db-615caaa91f18)", boundByController: true
I0904 20:43:22.274423       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-4h26g" status after binding: phase: Bound, bound to: "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18", bindCompleted: true, boundByController: true
I0904 20:43:22.274443       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-8v7d2" with version 3382
I0904 20:43:22.274478       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase: Failed, bound to: "azuredisk-8582/pvc-fwxnk (uid: 633fd388-d680-4db1-b72e-87378148b0aa)", boundByController: true
I0904 20:43:22.274508       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is bound to claim azuredisk-8582/pvc-fwxnk
I0904 20:43:22.274532       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: claim azuredisk-8582/pvc-fwxnk not found
I0904 20:43:22.274562       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-8v7d2]: phase: Bound, bound to: "pvc-017e56d7-c127-434c-83b8-005cf612b8fe", bindCompleted: true, boundByController: true
I0904 20:43:22.274600       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-8582/pvc-8v7d2]: volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" found: phase: Bound, bound to: "azuredisk-8582/pvc-8v7d2 (uid: 017e56d7-c127-434c-83b8-005cf612b8fe)", boundByController: true
I0904 20:43:22.274614       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-8582/pvc-8v7d2]: claim is already correctly bound
I0904 20:43:22.274622       1 pv_controller.go:1012] binding volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" to claim "azuredisk-8582/pvc-8v7d2"
... skipping 19 lines ...
I0904 20:43:22.275357       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-8v7d2" status after binding: phase: Bound, bound to: "pvc-017e56d7-c127-434c-83b8-005cf612b8fe", bindCompleted: true, boundByController: true
I0904 20:43:22.275012       1 pv_controller.go:1231] deleteVolumeOperation [pvc-633fd388-d680-4db1-b72e-87378148b0aa] started
I0904 20:43:22.284434       1 pv_controller.go:1340] isVolumeReleased[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is released
I0904 20:43:22.284463       1 pv_controller.go:1404] doDeleteVolume [pvc-633fd388-d680-4db1-b72e-87378148b0aa]
I0904 20:43:22.288536       1 gc_controller.go:161] GC'ing orphaned
I0904 20:43:22.288557       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:43:22.305513       1 pv_controller.go:1259] deletion of volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:43:22.305536       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: set phase Failed
I0904 20:43:22.305545       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase Failed already set
E0904 20:43:22.305573       1 goroutinemap.go:150] Operation for "delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]" failed. No retries permitted until 2022-09-04 20:43:23.305552819 +0000 UTC m=+1272.393046586 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/virtualMachines/capz-l9y77r-md-0-x4pd8), could not be deleted
I0904 20:43:22.406232       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:43:22.443733       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-md-0-x4pd8 ReadyCondition updated. Updating timestamp.
I0904 20:43:23.214829       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:43:24.903056       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:43:25.598892       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l9y77r-control-plane-kvxcv"
I0904 20:43:27.444511       1 node_lifecycle_controller.go:1047] Node capz-l9y77r-control-plane-kvxcv ReadyCondition updated. Updating timestamp.
... skipping 32 lines ...
I0904 20:43:37.275288       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-4h26g] status: phase Bound already set
I0904 20:43:37.275399       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: claim azuredisk-8582/pvc-4h26g found: phase: Bound, bound to: "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18", bindCompleted: true, boundByController: true
I0904 20:43:37.275484       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: all is bound
I0904 20:43:37.275526       1 pv_controller.go:858] updating PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: set phase Bound
I0904 20:43:37.275535       1 pv_controller.go:861] updating PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: phase Bound already set
I0904 20:43:37.275546       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" with version 3472
I0904 20:43:37.275565       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase: Failed, bound to: "azuredisk-8582/pvc-fwxnk (uid: 633fd388-d680-4db1-b72e-87378148b0aa)", boundByController: true
I0904 20:43:37.275628       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is bound to claim azuredisk-8582/pvc-fwxnk
I0904 20:43:37.275746       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: claim azuredisk-8582/pvc-fwxnk not found
I0904 20:43:37.275758       1 pv_controller.go:1108] reclaimVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: policy is Delete
I0904 20:43:37.275775       1 pv_controller.go:1752] scheduleOperation[delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]]
I0904 20:43:37.275800       1 pv_controller.go:1231] deleteVolumeOperation [pvc-633fd388-d680-4db1-b72e-87378148b0aa] started
I0904 20:43:37.275487       1 pv_controller.go:1038] volume "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18" bound to claim "azuredisk-8582/pvc-4h26g"
... skipping 14 lines ...
I0904 20:43:37.276246       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-8v7d2] status: phase Bound already set
I0904 20:43:37.276257       1 pv_controller.go:1038] volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" bound to claim "azuredisk-8582/pvc-8v7d2"
I0904 20:43:37.276274       1 pv_controller.go:1039] volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-8v7d2 (uid: 017e56d7-c127-434c-83b8-005cf612b8fe)", boundByController: true
I0904 20:43:37.276327       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-8v7d2" status after binding: phase: Bound, bound to: "pvc-017e56d7-c127-434c-83b8-005cf612b8fe", bindCompleted: true, boundByController: true
I0904 20:43:37.280780       1 pv_controller.go:1340] isVolumeReleased[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is released
I0904 20:43:37.280799       1 pv_controller.go:1404] doDeleteVolume [pvc-633fd388-d680-4db1-b72e-87378148b0aa]
I0904 20:43:37.280831       1 pv_controller.go:1259] deletion of volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) since it's in attaching or detaching state
I0904 20:43:37.280842       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: set phase Failed
I0904 20:43:37.280852       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase Failed already set
E0904 20:43:37.280878       1 goroutinemap.go:150] Operation for "delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]" failed. No retries permitted until 2022-09-04 20:43:39.280859545 +0000 UTC m=+1288.368353312 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa) since it's in attaching or detaching state
I0904 20:43:37.407008       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:43:39.192826       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 6 items received
I0904 20:43:40.311666       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 2 items received
I0904 20:43:41.218274       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="147.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44758" resp=200
I0904 20:43:42.289219       1 gc_controller.go:161] GC'ing orphaned
I0904 20:43:42.289260       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 20 lines ...
I0904 20:43:52.274944       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: volume is bound to claim azuredisk-8582/pvc-4h26g
I0904 20:43:52.274957       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: claim azuredisk-8582/pvc-4h26g found: phase: Bound, bound to: "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18", bindCompleted: true, boundByController: true
I0904 20:43:52.274989       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: all is bound
I0904 20:43:52.274996       1 pv_controller.go:858] updating PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: set phase Bound
I0904 20:43:52.275004       1 pv_controller.go:861] updating PersistentVolume[pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18]: phase Bound already set
I0904 20:43:52.275014       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" with version 3472
I0904 20:43:52.275034       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase: Failed, bound to: "azuredisk-8582/pvc-fwxnk (uid: 633fd388-d680-4db1-b72e-87378148b0aa)", boundByController: true
I0904 20:43:52.275054       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is bound to claim azuredisk-8582/pvc-fwxnk
I0904 20:43:52.275074       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: claim azuredisk-8582/pvc-fwxnk not found
I0904 20:43:52.275083       1 pv_controller.go:1108] reclaimVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: policy is Delete
I0904 20:43:52.275097       1 pv_controller.go:1752] scheduleOperation[delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]]
I0904 20:43:52.275123       1 pv_controller.go:1231] deleteVolumeOperation [pvc-633fd388-d680-4db1-b72e-87378148b0aa] started
I0904 20:43:52.275293       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-4h26g" with version 3387
... skipping 37 lines ...
I0904 20:43:57.455347       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-633fd388-d680-4db1-b72e-87378148b0aa
I0904 20:43:57.455373       1 pv_controller.go:1435] volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" deleted
I0904 20:43:57.455384       1 pv_controller.go:1283] deleteVolumeOperation [pvc-633fd388-d680-4db1-b72e-87378148b0aa]: success
I0904 20:43:57.461764       1 pv_protection_controller.go:205] Got event on PV pvc-633fd388-d680-4db1-b72e-87378148b0aa
I0904 20:43:57.461927       1 pv_protection_controller.go:125] Processing PV pvc-633fd388-d680-4db1-b72e-87378148b0aa
I0904 20:43:57.461769       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" with version 3535
I0904 20:43:57.462078       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: phase: Failed, bound to: "azuredisk-8582/pvc-fwxnk (uid: 633fd388-d680-4db1-b72e-87378148b0aa)", boundByController: true
I0904 20:43:57.462123       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: volume is bound to claim azuredisk-8582/pvc-fwxnk
I0904 20:43:57.462143       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: claim azuredisk-8582/pvc-fwxnk not found
I0904 20:43:57.462152       1 pv_controller.go:1108] reclaimVolume[pvc-633fd388-d680-4db1-b72e-87378148b0aa]: policy is Delete
I0904 20:43:57.462184       1 pv_controller.go:1752] scheduleOperation[delete-pvc-633fd388-d680-4db1-b72e-87378148b0aa[1289ccc8-fe5e-4a49-a547-1fd4b5ba6b78]]
I0904 20:43:57.462209       1 pv_controller.go:1231] deleteVolumeOperation [pvc-633fd388-d680-4db1-b72e-87378148b0aa] started
I0904 20:43:57.465421       1 pv_controller.go:1243] Volume "pvc-633fd388-d680-4db1-b72e-87378148b0aa" is already being deleted
... skipping 54 lines ...
I0904 20:44:02.720564       1 pv_controller.go:1108] reclaimVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: policy is Delete
I0904 20:44:02.720573       1 pv_controller.go:1752] scheduleOperation[delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]]
I0904 20:44:02.720582       1 pv_controller.go:1763] operation "delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]" is already running, skipping
I0904 20:44:02.720610       1 pv_controller.go:1231] deleteVolumeOperation [pvc-017e56d7-c127-434c-83b8-005cf612b8fe] started
I0904 20:44:02.722165       1 pv_controller.go:1340] isVolumeReleased[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: volume is released
I0904 20:44:02.722179       1 pv_controller.go:1404] doDeleteVolume [pvc-017e56d7-c127-434c-83b8-005cf612b8fe]
I0904 20:44:02.722207       1 pv_controller.go:1259] deletion of volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) since it's in attaching or detaching state
I0904 20:44:02.722218       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: set phase Failed
I0904 20:44:02.722227       1 pv_controller.go:858] updating PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: set phase Failed
I0904 20:44:02.724818       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" with version 3547
I0904 20:44:02.725007       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: phase: Failed, bound to: "azuredisk-8582/pvc-8v7d2 (uid: 017e56d7-c127-434c-83b8-005cf612b8fe)", boundByController: true
I0904 20:44:02.725094       1 pv_protection_controller.go:205] Got event on PV pvc-017e56d7-c127-434c-83b8-005cf612b8fe
I0904 20:44:02.725112       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: volume is bound to claim azuredisk-8582/pvc-8v7d2
I0904 20:44:02.725277       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: claim azuredisk-8582/pvc-8v7d2 not found
I0904 20:44:02.725358       1 pv_controller.go:1108] reclaimVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: policy is Delete
I0904 20:44:02.725376       1 pv_controller.go:1752] scheduleOperation[delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]]
I0904 20:44:02.725384       1 pv_controller.go:1763] operation "delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]" is already running, skipping
I0904 20:44:02.725773       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" with version 3547
I0904 20:44:02.725927       1 pv_controller.go:879] volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" entered phase "Failed"
I0904 20:44:02.725940       1 pv_controller.go:901] volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) since it's in attaching or detaching state
E0904 20:44:02.726013       1 goroutinemap.go:150] Operation for "delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]" failed. No retries permitted until 2022-09-04 20:44:03.225964083 +0000 UTC m=+1312.313457750 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) since it's in attaching or detaching state
I0904 20:44:02.726105       1 event.go:291] "Event occurred" object="pvc-017e56d7-c127-434c-83b8-005cf612b8fe" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) since it's in attaching or detaching state"
I0904 20:44:04.626901       1 azure_controller_standard.go:184] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-x4pd8) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) returned with <nil>
I0904 20:44:04.626935       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe) succeeded
I0904 20:44:04.626945       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe was detached from node:capz-l9y77r-md-0-x4pd8
I0904 20:44:04.626969       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe") on node "capz-l9y77r-md-0-x4pd8" 
I0904 20:44:07.094256       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0904 20:44:07.274843       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:44:07.274901       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" with version 3547
I0904 20:44:07.274937       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: phase: Failed, bound to: "azuredisk-8582/pvc-8v7d2 (uid: 017e56d7-c127-434c-83b8-005cf612b8fe)", boundByController: true
I0904 20:44:07.275003       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: volume is bound to claim azuredisk-8582/pvc-8v7d2
I0904 20:44:07.275023       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: claim azuredisk-8582/pvc-8v7d2 not found
I0904 20:44:07.275067       1 pv_controller.go:1108] reclaimVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: policy is Delete
I0904 20:44:07.275096       1 pv_controller.go:1752] scheduleOperation[delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]]
I0904 20:44:07.275171       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-4h26g" with version 3387
I0904 20:44:07.275261       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-4h26g]: phase: Bound, bound to: "pvc-e0a430dc-7b5b-440d-a7db-615caaa91f18", bindCompleted: true, boundByController: true
... skipping 24 lines ...
I0904 20:44:07.409131       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:44:11.218995       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="57.2µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37724" resp=200
I0904 20:44:12.549199       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-017e56d7-c127-434c-83b8-005cf612b8fe
I0904 20:44:12.549226       1 pv_controller.go:1435] volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" deleted
I0904 20:44:12.549236       1 pv_controller.go:1283] deleteVolumeOperation [pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: success
I0904 20:44:12.561342       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-017e56d7-c127-434c-83b8-005cf612b8fe" with version 3563
I0904 20:44:12.561540       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: phase: Failed, bound to: "azuredisk-8582/pvc-8v7d2 (uid: 017e56d7-c127-434c-83b8-005cf612b8fe)", boundByController: true
I0904 20:44:12.561634       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: volume is bound to claim azuredisk-8582/pvc-8v7d2
I0904 20:44:12.561660       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: claim azuredisk-8582/pvc-8v7d2 not found
I0904 20:44:12.561669       1 pv_controller.go:1108] reclaimVolume[pvc-017e56d7-c127-434c-83b8-005cf612b8fe]: policy is Delete
I0904 20:44:12.561686       1 pv_controller.go:1752] scheduleOperation[delete-pvc-017e56d7-c127-434c-83b8-005cf612b8fe[3e7fd5b0-7120-48b7-83d0-3f1706ece4ea]]
I0904 20:44:12.561774       1 pv_controller.go:1231] deleteVolumeOperation [pvc-017e56d7-c127-434c-83b8-005cf612b8fe] started
I0904 20:44:12.561962       1 pv_protection_controller.go:205] Got event on PV pvc-017e56d7-c127-434c-83b8-005cf612b8fe
... skipping 143 lines ...
I0904 20:44:28.737039       1 publisher.go:186] Finished syncing namespace "azuredisk-8582" (4.743434ms)
I0904 20:44:28.773243       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (1.6µs)
I0904 20:44:28.776379       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8582, estimate: 0, errors: <nil>
I0904 20:44:28.784494       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (168.846735ms)
I0904 20:44:29.528570       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0904 20:44:29.569301       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-xm65m, uid 9c5cf826-29e2-4436-9005-370fcf95751b, event type delete
E0904 20:44:29.582987       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-lzmfw" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0904 20:44:29.620256       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 0079b7a1-654e-4b9a-83a4-341b6221f55a, event type delete
I0904 20:44:29.623715       1 publisher.go:186] Finished syncing namespace "azuredisk-7726" (3.389725ms)
I0904 20:44:29.639897       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0904 20:44:29.640027       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 81733eb5-3106-4fdf-9a8e-9b6bb5c666d3, event type delete
I0904 20:44:29.640059       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.1µs)
I0904 20:44:29.666697       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
... skipping 87 lines ...
I0904 20:44:31.415245       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-h6b8b, uid 72d74a73-3004-4e3f-9721-95b817e40d23, event type delete
I0904 20:44:31.457552       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (1.4µs)
I0904 20:44:31.458219       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1387, estimate: 0, errors: <nil>
I0904 20:44:31.474679       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1387" (141.564536ms)
I0904 20:44:32.259584       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4547
I0904 20:44:32.351735       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-5jr57, uid d7f08579-84ca-46f8-a85b-66f33797cf64, event type delete
E0904 20:44:32.363448       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-vtwx5" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0904 20:44:32.370400       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4547/default), service account deleted, removing tokens
I0904 20:44:32.370577       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (2.2µs)
I0904 20:44:32.370439       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4547, name default, uid 068c03de-f83d-4e08-9a54-bf9a411db8c7, event type delete
I0904 20:44:32.384627       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4547, name kube-root-ca.crt, uid 072c7adb-ce5d-4454-ba33-ae44af7ec654, event type delete
I0904 20:44:32.389351       1 publisher.go:186] Finished syncing namespace "azuredisk-4547" (4.708734ms)
I0904 20:44:32.390590       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (1.4µs)
... skipping 438 lines ...
I0904 20:45:59.904678       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l9y77r-dynamic-pvc-1687b7e4-8e67-46ec-9e33-f439856fdb59. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-1687b7e4-8e67-46ec-9e33-f439856fdb59" to node "capz-l9y77r-md-0-x4pd8".
I0904 20:45:59.938465       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-1687b7e4-8e67-46ec-9e33-f439856fdb59" lun 0 to node "capz-l9y77r-md-0-x4pd8".
I0904 20:45:59.938499       1 azure_controller_standard.go:93] azureDisk - update(capz-l9y77r): vm(capz-l9y77r-md-0-x4pd8) - attach disk(capz-l9y77r-dynamic-pvc-1687b7e4-8e67-46ec-9e33-f439856fdb59, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l9y77r/providers/Microsoft.Compute/disks/capz-l9y77r-dynamic-pvc-1687b7e4-8e67-46ec-9e33-f439856fdb59) with DiskEncryptionSetID()
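
The attach sequence above first asks for the disk's LUN on the VM (GetDiskLun finds none because the disk is not attached yet), then picks a free LUN, 0 here, and issues the attach. A small self-contained Go sketch of the free-LUN selection idea follows; the 32-slot limit and the empty usedLUNs input are illustrative, not values taken from the cloud-provider code.

package main

import "fmt"

// lowestFreeLUN returns the smallest LUN in [0, maxLUNs) not already in use,
// or -1 if the VM has no free data-disk slot. This mirrors the idea behind
// the "Trying to attach volume ... lun 0" line; the real controller derives
// the used LUNs from the VM's data-disk list.
func lowestFreeLUN(usedLUNs []int32, maxLUNs int32) int32 {
	used := make(map[int32]bool, len(usedLUNs))
	for _, l := range usedLUNs {
		used[l] = true
	}
	for l := int32(0); l < maxLUNs; l++ {
		if !used[l] {
			return l
		}
	}
	return -1
}

func main() {
	// No data disks attached yet, as in the log, so LUN 0 is chosen.
	fmt.Println(lowestFreeLUN(nil, 32)) // prints 0
}
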
I0904 20:46:00.867361       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0904 20:46:00.899944       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-mzhzv, uid a08a0597-f316-4f44-af21-548acd5174c3, event type delete
I0904 20:46:00.907620       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid e3bd3d0d-1dfe-479b-8e9c-58929e0a00ff, event type delete
E0904 20:46:00.911354       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-h99nz" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0904 20:46:00.913972       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (6.309446ms)
I0904 20:46:00.935570       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-hgztr.1711c31e037f6468, uid 3a33fdcb-0f61-4339-a1df-98aada88b83e, event type delete
I0904 20:46:00.939233       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-hgztr.1711c31f5a5b9131, uid ba65b75f-860f-4ab7-b7f9-d0eb5b43acdc, event type delete
I0904 20:46:00.941852       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-hgztr.1711c320e15efef1, uid 6a7cd15d-c045-4ded-a2d4-9b3d399ecbdd, event type delete
I0904 20:46:00.944094       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-hgztr.1711c320e3abc955, uid 9d13f037-241d-494a-9782-92576e3d8461, event type delete
I0904 20:46:00.947394       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-hgztr.1711c320e95f04a8, uid 547e5aa1-f553-4f80-8370-1a147d3b70e7, event type delete
... skipping 503 lines ...
I0904 20:47:22.829527       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4415" (2.1µs)
I0904 20:47:22.837666       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4415" (124.067506ms)
I0904 20:47:23.265500       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-6720
I0904 20:47:23.334863       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-6720, name default-token-rnmrz, uid d8950758-5529-4dc7-9c4e-19ecc69d89b7, event type delete
I0904 20:47:23.342194       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-6720, name kube-root-ca.crt, uid aba67067-c51b-4f6a-a18d-7bcdfcde0f11, event type delete
I0904 20:47:23.350881       1 publisher.go:186] Finished syncing namespace "azuredisk-6720" (8.537363ms)
E0904 20:47:23.355782       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6720/default: secrets "default-token-68qkj" is forbidden: unable to create new content in namespace azuredisk-6720 because it is being terminated
I0904 20:47:23.370350       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0904 20:47:23.377470       1 tokens_controller.go:252] syncServiceAccount(azuredisk-6720/default), service account deleted, removing tokens
I0904 20:47:23.377806       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-6720, name default, uid 34d5e490-e19f-450e-8abb-fcd87ae45586, event type delete
I0904 20:47:23.377829       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.3µs)
I0904 20:47:23.404483       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.4µs)
I0904 20:47:23.404693       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-6720, estimate: 0, errors: <nil>
I0904 20:47:23.411888       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-6720" (149.34079ms)
2022/09/04 20:47:24 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1240.161 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
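
The summary and deprecation notice above come from Ginkgo v1, and the JUnit report written to /logs/artifacts/junit_01.xml is the kind of artifact a v1 suite typically wires up with a custom reporter. A hedged sketch of that deprecated v1-style wiring is below; the suite name and output path are illustrative, and Ginkgo v2 replaces custom reporters with the --junit-report CLI flag.

package e2e

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/gomega"
)

// TestE2E shows the deprecated Ginkgo v1 pattern for attaching a JUnit
// reporter to a suite; this is the style the notice above is warning about.
func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	junit := reporters.NewJUnitReporter("/logs/artifacts/junit_01.xml") // illustrative path
	ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "AzureDisk E2E Suite", []ginkgo.Reporter{junit})
}
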
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-s5ps7, container manager
STEP: Dumping workload cluster default/capz-l9y77r logs
Sep  4 20:48:52.233: INFO: Collecting logs for Linux node capz-l9y77r-control-plane-kvxcv in cluster capz-l9y77r in namespace default

Sep  4 20:49:52.234: INFO: Collecting boot logs for AzureMachine capz-l9y77r-control-plane-kvxcv

Failed to get logs for machine capz-l9y77r-control-plane-cc4mx, cluster default/capz-l9y77r: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 20:49:53.180: INFO: Collecting logs for Linux node capz-l9y77r-md-0-qlvdg in cluster capz-l9y77r in namespace default

Sep  4 20:50:53.181: INFO: Collecting boot logs for AzureMachine capz-l9y77r-md-0-qlvdg

Failed to get logs for machine capz-l9y77r-md-0-6dfb84d5db-q8h27, cluster default/capz-l9y77r: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 20:50:53.556: INFO: Collecting logs for Linux node capz-l9y77r-md-0-x4pd8 in cluster capz-l9y77r in namespace default

Sep  4 20:51:53.558: INFO: Collecting boot logs for AzureMachine capz-l9y77r-md-0-x4pd8

Failed to get logs for machine capz-l9y77r-md-0-6dfb84d5db-zfbs8, cluster default/capz-l9y77r: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-l9y77r kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-node-h8ckx, container calico-node
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-9kdn2
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-xc678
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-9kdn2, container calico-kube-controllers
STEP: Creating log watcher for controller kube-system/calico-node-gcpgj, container calico-node
... skipping 5 lines ...
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-6m54z, container coredns
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-6m54z
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-m7gbd, container coredns
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-m7gbd
STEP: Creating log watcher for controller kube-system/etcd-capz-l9y77r-control-plane-kvxcv, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-l9y77r-control-plane-kvxcv
STEP: failed to find events of Pod "etcd-capz-l9y77r-control-plane-kvxcv"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-l9y77r-control-plane-kvxcv, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-l9y77r-control-plane-kvxcv
STEP: failed to find events of Pod "kube-apiserver-capz-l9y77r-control-plane-kvxcv"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-l9y77r-control-plane-kvxcv, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-l9y77r-control-plane-kvxcv
STEP: failed to find events of Pod "kube-controller-manager-capz-l9y77r-control-plane-kvxcv"
STEP: Creating log watcher for controller kube-system/kube-proxy-5v7vh, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-5v7vh
STEP: Collecting events for Pod kube-system/calico-node-gcpgj
STEP: Creating log watcher for controller kube-system/kube-proxy-k5cbn, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-l9y77r-control-plane-kvxcv
STEP: failed to find events of Pod "kube-scheduler-capz-l9y77r-control-plane-kvxcv"
STEP: Collecting events for Pod kube-system/kube-proxy-zxrsz
STEP: Fetching kube-system pod logs took 753.450424ms
STEP: Dumping workload cluster default/capz-l9y77r Azure activity log
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-l9y77r-control-plane-kvxcv, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-proxy-k5cbn
STEP: Fetching activity logs took 7.330302269s
... skipping 17 lines ...