Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-06 20:02
Elapsed: 49m27s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 627 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-hx45zt-kubeconfig; do sleep 1; done"
capz-hx45zt-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-hx45zt-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-hx45zt-control-plane-nhv8k   NotReady   <none>   1s    v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-hx45zt-control-plane-nhv8k condition met
node/capz-hx45zt-mp-0000000 condition met
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:22:46.476: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-h7gdv" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  6 20:22:46.541: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 64.78733ms
Sep  6 20:22:48.608: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131924635s
Sep  6 20:22:50.675: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198804183s
Sep  6 20:22:52.741: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.26527604s
Sep  6 20:22:54.809: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.332723362s
Sep  6 20:22:56.875: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.399291393s
Sep  6 20:22:58.941: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.465121638s
Sep  6 20:23:01.012: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.53611231s
Sep  6 20:23:03.083: INFO: Pod "azuredisk-volume-tester-h7gdv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.60683875s
STEP: Saw pod success
Sep  6 20:23:03.083: INFO: Pod "azuredisk-volume-tester-h7gdv" satisfied condition "Succeeded or Failed"
Sep  6 20:23:03.083: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-h7gdv"
Sep  6 20:23:03.161: INFO: Pod azuredisk-volume-tester-h7gdv has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-h7gdv in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:23:03.383: INFO: deleting PVC "azuredisk-8081"/"pvc-h45zl"
Sep  6 20:23:03.383: INFO: Deleting PersistentVolumeClaim "pvc-h45zl"
STEP: waiting for claim's PV "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" to be deleted
Sep  6 20:23:03.451: INFO: Waiting up to 10m0s for PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 to get deleted
Sep  6 20:23:03.555: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (103.890468ms)
Sep  6 20:23:08.625: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (5.174093867s)
Sep  6 20:23:13.693: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (10.242355901s)
Sep  6 20:23:18.764: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (15.313436347s)
Sep  6 20:23:23.835: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (20.384241978s)
Sep  6 20:23:28.905: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 found and phase=Failed (25.454093616s)
Sep  6 20:23:33.974: INFO: PersistentVolume pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 was removed
Sep  6 20:23:33.974: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  6 20:23:34.043: INFO: Claim "azuredisk-8081" in namespace "pvc-h45zl" doesn't exist in the system
Sep  6 20:23:34.043: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-vj2h9
Sep  6 20:23:34.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  6 20:23:51.492: INFO: deleting Pod "azuredisk-495"/"azuredisk-volume-tester-xm7n5"
Sep  6 20:23:51.559: INFO: Error getting logs for pod azuredisk-volume-tester-xm7n5: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-xm7n5)
STEP: Deleting pod azuredisk-volume-tester-xm7n5 in namespace azuredisk-495
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:23:51.761: INFO: deleting PVC "azuredisk-495"/"pvc-njww6"
Sep  6 20:23:51.761: INFO: Deleting PersistentVolumeClaim "pvc-njww6"
STEP: waiting for claim's PV "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" to be deleted
Sep  6 20:23:51.827: INFO: Waiting up to 10m0s for PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 to get deleted
Sep  6 20:23:51.892: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Bound (64.927704ms)
Sep  6 20:23:56.959: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (5.131970648s)
Sep  6 20:24:02.029: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (10.201312663s)
Sep  6 20:24:07.099: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (15.271319323s)
Sep  6 20:24:12.165: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (20.337506015s)
Sep  6 20:24:17.233: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (25.405978544s)
Sep  6 20:24:22.299: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (30.472089133s)
Sep  6 20:24:27.371: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (35.544143443s)
Sep  6 20:24:32.443: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 found and phase=Failed (40.615878587s)
Sep  6 20:24:37.511: INFO: PersistentVolume pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 was removed
Sep  6 20:24:37.511: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-495 to be removed
Sep  6 20:24:37.576: INFO: Claim "azuredisk-495" in namespace "pvc-njww6" doesn't exist in the system
Sep  6 20:24:37.576: INFO: deleting StorageClass azuredisk-495-kubernetes.io-azure-disk-dynamic-sc-m47hf
Sep  6 20:24:37.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-495" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:24:38.882: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k8kv8" in namespace "azuredisk-2888" to be "Succeeded or Failed"
Sep  6 20:24:38.947: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 65.411192ms
Sep  6 20:24:41.014: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13256166s
Sep  6 20:24:43.082: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200121328s
Sep  6 20:24:45.151: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268859208s
Sep  6 20:24:47.218: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.336460252s
Sep  6 20:24:49.286: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.404389259s
Sep  6 20:24:51.353: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.470951817s
Sep  6 20:24:53.425: INFO: Pod "azuredisk-volume-tester-k8kv8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.543309124s
STEP: Saw pod success
Sep  6 20:24:53.425: INFO: Pod "azuredisk-volume-tester-k8kv8" satisfied condition "Succeeded or Failed"
Sep  6 20:24:53.425: INFO: deleting Pod "azuredisk-2888"/"azuredisk-volume-tester-k8kv8"
Sep  6 20:24:53.493: INFO: Pod azuredisk-volume-tester-k8kv8 has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-k8kv8 in namespace azuredisk-2888
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:24:53.716: INFO: deleting PVC "azuredisk-2888"/"pvc-75255"
Sep  6 20:24:53.716: INFO: Deleting PersistentVolumeClaim "pvc-75255"
STEP: waiting for claim's PV "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" to be deleted
Sep  6 20:24:53.784: INFO: Waiting up to 10m0s for PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 to get deleted
Sep  6 20:24:53.849: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 found and phase=Failed (64.552872ms)
Sep  6 20:24:58.915: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 found and phase=Failed (5.131297537s)
Sep  6 20:25:03.983: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 found and phase=Failed (10.199336527s)
Sep  6 20:25:09.051: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 found and phase=Failed (15.266771037s)
Sep  6 20:25:14.121: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 found and phase=Failed (20.336631341s)
Sep  6 20:25:19.188: INFO: PersistentVolume pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 was removed
Sep  6 20:25:19.188: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2888 to be removed
Sep  6 20:25:19.254: INFO: Claim "azuredisk-2888" in namespace "pvc-75255" doesn't exist in the system
Sep  6 20:25:19.254: INFO: deleting StorageClass azuredisk-2888-kubernetes.io-azure-disk-dynamic-sc-gntmr
Sep  6 20:25:19.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2888" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  6 20:25:20.604: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qp2bq" in namespace "azuredisk-5429" to be "Error status code"
Sep  6 20:25:20.669: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 64.853175ms
Sep  6 20:25:22.738: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134418971s
Sep  6 20:25:24.806: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202130705s
Sep  6 20:25:26.872: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.268514555s
Sep  6 20:25:28.939: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.334683242s
Sep  6 20:25:31.005: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.401462077s
Sep  6 20:25:33.072: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Pending", Reason="", readiness=false. Elapsed: 12.468091491s
Sep  6 20:25:35.142: INFO: Pod "azuredisk-volume-tester-qp2bq": Phase="Failed", Reason="", readiness=false. Elapsed: 14.538074728s
STEP: Saw pod failure
Sep  6 20:25:35.142: INFO: Pod "azuredisk-volume-tester-qp2bq" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  6 20:25:35.209: INFO: deleting Pod "azuredisk-5429"/"azuredisk-volume-tester-qp2bq"
Sep  6 20:25:35.279: INFO: Pod azuredisk-volume-tester-qp2bq has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-qp2bq in namespace azuredisk-5429
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:25:35.503: INFO: deleting PVC "azuredisk-5429"/"pvc-xr629"
Sep  6 20:25:35.503: INFO: Deleting PersistentVolumeClaim "pvc-xr629"
STEP: waiting for claim's PV "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" to be deleted
Sep  6 20:25:35.569: INFO: Waiting up to 10m0s for PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d to get deleted
Sep  6 20:25:35.635: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (65.392952ms)
Sep  6 20:25:40.705: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (5.135538768s)
Sep  6 20:25:45.775: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (10.205832965s)
Sep  6 20:25:50.842: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (15.272653456s)
Sep  6 20:25:55.911: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (20.341271573s)
Sep  6 20:26:00.981: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (25.411474673s)
Sep  6 20:26:06.049: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (30.479267911s)
Sep  6 20:26:11.118: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (35.548889128s)
Sep  6 20:26:16.185: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d found and phase=Failed (40.615388964s)
Sep  6 20:26:21.254: INFO: PersistentVolume pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d was removed
Sep  6 20:26:21.254: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5429 to be removed
Sep  6 20:26:21.319: INFO: Claim "azuredisk-5429" in namespace "pvc-xr629" doesn't exist in the system
Sep  6 20:26:21.319: INFO: deleting StorageClass azuredisk-5429-kubernetes.io-azure-disk-dynamic-sc-jw5xv
Sep  6 20:26:21.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5429" for this suite.
... skipping 55 lines ...
Sep  6 20:27:23.094: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (15.267890303s)
Sep  6 20:27:28.160: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (20.33389049s)
Sep  6 20:27:33.229: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (25.402332005s)
Sep  6 20:27:38.294: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (30.467941823s)
Sep  6 20:27:43.364: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (35.537279143s)
Sep  6 20:27:48.430: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Bound (40.603710119s)
Sep  6 20:27:53.495: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Failed (45.669119796s)
Sep  6 20:27:58.565: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 found and phase=Failed (50.738563239s)
Sep  6 20:28:03.634: INFO: PersistentVolume pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 was removed
Sep  6 20:28:03.634: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-3090 to be removed
Sep  6 20:28:03.699: INFO: Claim "azuredisk-3090" in namespace "pvc-lrr98" doesn't exist in the system
Sep  6 20:28:03.699: INFO: deleting StorageClass azuredisk-3090-kubernetes.io-azure-disk-dynamic-sc-h5v9q
Sep  6 20:28:03.765: INFO: deleting Pod "azuredisk-3090"/"azuredisk-volume-tester-l84xw"
Sep  6 20:28:03.833: INFO: Pod azuredisk-volume-tester-l84xw has the following logs: 
... skipping 9 lines ...
Sep  6 20:28:14.301: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (10.199759987s)
Sep  6 20:28:19.373: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (15.271368572s)
Sep  6 20:28:24.442: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (20.340833731s)
Sep  6 20:28:29.512: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (25.410046559s)
Sep  6 20:28:34.581: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (30.479799824s)
Sep  6 20:28:39.647: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Bound (35.545763245s)
Sep  6 20:28:44.713: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Failed (40.611538492s)
Sep  6 20:28:49.779: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Failed (45.677174919s)
Sep  6 20:28:54.847: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Failed (50.745279551s)
Sep  6 20:28:59.913: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 found and phase=Failed (55.811621204s)
Sep  6 20:29:04.979: INFO: PersistentVolume pvc-e618297b-caca-412c-a79b-8d6f019b3664 was removed
Sep  6 20:29:04.979: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-3090 to be removed
Sep  6 20:29:05.044: INFO: Claim "azuredisk-3090" in namespace "pvc-ck5cw" doesn't exist in the system
Sep  6 20:29:05.044: INFO: deleting StorageClass azuredisk-3090-kubernetes.io-azure-disk-dynamic-sc-5w75k
Sep  6 20:29:05.111: INFO: deleting Pod "azuredisk-3090"/"azuredisk-volume-tester-lpzvn"
Sep  6 20:29:05.196: INFO: Pod azuredisk-volume-tester-lpzvn has the following logs: 
... skipping 10 lines ...
Sep  6 20:29:20.738: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (15.271718582s)
Sep  6 20:29:25.805: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (20.338702826s)
Sep  6 20:29:30.874: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (25.407901491s)
Sep  6 20:29:35.943: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (30.476733766s)
Sep  6 20:29:41.008: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (35.542400036s)
Sep  6 20:29:46.078: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Bound (40.611956378s)
Sep  6 20:29:51.147: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Failed (45.681223282s)
Sep  6 20:29:56.216: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Failed (50.750412819s)
Sep  6 20:30:01.283: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 found and phase=Failed (55.817157253s)
Sep  6 20:30:06.351: INFO: PersistentVolume pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 was removed
Sep  6 20:30:06.351: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-3090 to be removed
Sep  6 20:30:06.416: INFO: Claim "azuredisk-3090" in namespace "pvc-bqdjc" doesn't exist in the system
Sep  6 20:30:06.416: INFO: deleting StorageClass azuredisk-3090-kubernetes.io-azure-disk-dynamic-sc-f6vlj
Sep  6 20:30:06.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-3090" for this suite.
... skipping 58 lines ...
Sep  6 20:31:39.688: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (5.13435038s)
Sep  6 20:31:44.758: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (10.204062203s)
Sep  6 20:31:49.827: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (15.273427771s)
Sep  6 20:31:54.896: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (20.342497581s)
Sep  6 20:31:59.963: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (25.408879213s)
Sep  6 20:32:05.031: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Bound (30.477765354s)
Sep  6 20:32:10.097: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Failed (35.543428764s)
Sep  6 20:32:15.167: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Failed (40.613611782s)
Sep  6 20:32:20.233: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Failed (45.679490635s)
Sep  6 20:32:25.302: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Failed (50.748701467s)
Sep  6 20:32:30.370: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b found and phase=Failed (55.816500071s)
Sep  6 20:32:35.438: INFO: PersistentVolume pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b was removed
Sep  6 20:32:35.438: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-6159 to be removed
Sep  6 20:32:35.503: INFO: Claim "azuredisk-6159" in namespace "pvc-bfhx6" doesn't exist in the system
Sep  6 20:32:35.503: INFO: deleting StorageClass azuredisk-6159-kubernetes.io-azure-disk-dynamic-sc-4v5nn
Sep  6 20:32:35.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-6159" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:32:55.618: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qzdwc" in namespace "azuredisk-9241" to be "Succeeded or Failed"
Sep  6 20:32:55.684: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 66.282457ms
Sep  6 20:32:57.749: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131659006s
Sep  6 20:32:59.819: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201438261s
Sep  6 20:33:01.889: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271027561s
Sep  6 20:33:03.963: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.345715254s
Sep  6 20:33:06.034: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.416294217s
... skipping 9 lines ...
Sep  6 20:33:26.735: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 31.117663403s
Sep  6 20:33:28.806: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 33.187913157s
Sep  6 20:33:30.875: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 35.257338016s
Sep  6 20:33:32.944: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Pending", Reason="", readiness=false. Elapsed: 37.326615561s
Sep  6 20:33:35.015: INFO: Pod "azuredisk-volume-tester-qzdwc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.397237934s
STEP: Saw pod success
Sep  6 20:33:35.015: INFO: Pod "azuredisk-volume-tester-qzdwc" satisfied condition "Succeeded or Failed"
Sep  6 20:33:35.015: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-qzdwc"
Sep  6 20:33:35.090: INFO: Pod azuredisk-volume-tester-qzdwc has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-qzdwc in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:33:35.302: INFO: deleting PVC "azuredisk-9241"/"pvc-xhwjr"
Sep  6 20:33:35.302: INFO: Deleting PersistentVolumeClaim "pvc-xhwjr"
STEP: waiting for claim's PV "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" to be deleted
Sep  6 20:33:35.369: INFO: Waiting up to 10m0s for PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 to get deleted
Sep  6 20:33:35.438: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (68.823213ms)
Sep  6 20:33:40.507: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (5.137799072s)
Sep  6 20:33:45.577: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (10.207448871s)
Sep  6 20:33:50.646: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (15.276291s)
Sep  6 20:33:55.715: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (20.345220662s)
Sep  6 20:34:00.780: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 found and phase=Failed (25.410363485s)
Sep  6 20:34:05.847: INFO: PersistentVolume pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2 was removed
Sep  6 20:34:05.847: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  6 20:34:05.911: INFO: Claim "azuredisk-9241" in namespace "pvc-xhwjr" doesn't exist in the system
Sep  6 20:34:05.911: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-kmmdz
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:34:06.108: INFO: deleting PVC "azuredisk-9241"/"pvc-g9bdz"
Sep  6 20:34:06.108: INFO: Deleting PersistentVolumeClaim "pvc-g9bdz"
STEP: waiting for claim's PV "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" to be deleted
Sep  6 20:34:06.175: INFO: Waiting up to 10m0s for PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 to get deleted
Sep  6 20:34:06.239: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (64.45844ms)
Sep  6 20:34:11.308: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (5.133656638s)
Sep  6 20:34:16.374: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (10.199572996s)
Sep  6 20:34:21.442: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (15.267654334s)
Sep  6 20:34:26.508: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (20.332912227s)
Sep  6 20:34:31.577: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 found and phase=Failed (25.402601092s)
Sep  6 20:34:36.643: INFO: PersistentVolume pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 was removed
Sep  6 20:34:36.643: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  6 20:34:36.708: INFO: Claim "azuredisk-9241" in namespace "pvc-g9bdz" doesn't exist in the system
Sep  6 20:34:36.708: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-6ntrz
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:34:48.578: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-48z94" in namespace "azuredisk-9336" to be "Succeeded or Failed"
Sep  6 20:34:48.643: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 64.749642ms
Sep  6 20:34:50.710: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1312336s
Sep  6 20:34:52.781: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.202198643s
Sep  6 20:34:54.850: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.271855994s
Sep  6 20:34:56.920: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.34131848s
Sep  6 20:34:58.989: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 10.410557622s
Sep  6 20:35:01.058: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 12.47995617s
Sep  6 20:35:03.129: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 14.550552746s
Sep  6 20:35:05.207: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 16.628195844s
Sep  6 20:35:07.277: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 18.698364504s
Sep  6 20:35:09.348: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Pending", Reason="", readiness=false. Elapsed: 20.769770412s
Sep  6 20:35:11.418: INFO: Pod "azuredisk-volume-tester-48z94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.83936566s
STEP: Saw pod success
Sep  6 20:35:11.418: INFO: Pod "azuredisk-volume-tester-48z94" satisfied condition "Succeeded or Failed"
Sep  6 20:35:11.418: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-48z94"
Sep  6 20:35:11.494: INFO: Pod azuredisk-volume-tester-48z94 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.049685 seconds, 2.0GB/s
hello world

STEP: Deleting pod azuredisk-volume-tester-48z94 in namespace azuredisk-9336
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:35:11.701: INFO: deleting PVC "azuredisk-9336"/"pvc-tdg69"
Sep  6 20:35:11.701: INFO: Deleting PersistentVolumeClaim "pvc-tdg69"
STEP: waiting for claim's PV "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" to be deleted
Sep  6 20:35:11.768: INFO: Waiting up to 10m0s for PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 to get deleted
Sep  6 20:35:11.834: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (66.573925ms)
Sep  6 20:35:16.904: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (5.136285799s)
Sep  6 20:35:21.973: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (10.204902624s)
Sep  6 20:35:27.044: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (15.276808157s)
Sep  6 20:35:32.114: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (20.346005088s)
Sep  6 20:35:37.184: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (25.416060901s)
Sep  6 20:35:42.250: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (30.481992351s)
Sep  6 20:35:47.315: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 found and phase=Failed (35.547797662s)
Sep  6 20:35:52.381: INFO: PersistentVolume pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 was removed
Sep  6 20:35:52.381: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  6 20:35:52.446: INFO: Claim "azuredisk-9336" in namespace "pvc-tdg69" doesn't exist in the system
Sep  6 20:35:52.446: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-jt9bl
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  6 20:36:06.389: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-lmjtr" in namespace "azuredisk-8591" to be "Succeeded or Failed"
Sep  6 20:36:06.454: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 64.919872ms
Sep  6 20:36:08.519: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130130904s
Sep  6 20:36:10.589: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200082686s
Sep  6 20:36:12.658: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.269009866s
Sep  6 20:36:14.728: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339379664s
Sep  6 20:36:16.797: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 10.408657426s
... skipping 8 lines ...
Sep  6 20:36:35.433: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 29.044022861s
Sep  6 20:36:37.503: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 31.113998429s
Sep  6 20:36:39.572: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 33.183527798s
Sep  6 20:36:41.642: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Pending", Reason="", readiness=false. Elapsed: 35.253054244s
Sep  6 20:36:43.711: INFO: Pod "azuredisk-volume-tester-lmjtr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.32251947s
STEP: Saw pod success
Sep  6 20:36:43.711: INFO: Pod "azuredisk-volume-tester-lmjtr" satisfied condition "Succeeded or Failed"
Sep  6 20:36:43.711: INFO: deleting Pod "azuredisk-8591"/"azuredisk-volume-tester-lmjtr"
Sep  6 20:36:43.786: INFO: Pod azuredisk-volume-tester-lmjtr has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-lmjtr in namespace azuredisk-8591
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:36:43.993: INFO: deleting PVC "azuredisk-8591"/"pvc-6v5z9"
Sep  6 20:36:43.993: INFO: Deleting PersistentVolumeClaim "pvc-6v5z9"
STEP: waiting for claim's PV "pvc-84da95c0-4a27-479a-9043-84375f142a41" to be deleted
Sep  6 20:36:44.059: INFO: Waiting up to 10m0s for PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 to get deleted
Sep  6 20:36:44.124: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (64.529522ms)
Sep  6 20:36:49.193: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (5.133877292s)
Sep  6 20:36:54.266: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (10.206909991s)
Sep  6 20:36:59.332: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (15.272807265s)
Sep  6 20:37:04.398: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (20.338259657s)
Sep  6 20:37:09.469: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (25.409124547s)
Sep  6 20:37:14.535: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 found and phase=Failed (30.475614536s)
Sep  6 20:37:19.604: INFO: PersistentVolume pvc-84da95c0-4a27-479a-9043-84375f142a41 was removed
Sep  6 20:37:19.604: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8591 to be removed
Sep  6 20:37:19.669: INFO: Claim "azuredisk-8591" in namespace "pvc-6v5z9" doesn't exist in the system
Sep  6 20:37:19.669: INFO: deleting StorageClass azuredisk-8591-kubernetes.io-azure-disk-dynamic-sc-269k8
STEP: validating provisioned PV
STEP: checking the PV
Sep  6 20:37:19.866: INFO: deleting PVC "azuredisk-8591"/"pvc-xfpcn"
Sep  6 20:37:19.866: INFO: Deleting PersistentVolumeClaim "pvc-xfpcn"
STEP: waiting for claim's PV "pvc-beaa455d-79a3-4471-9257-62806b6802c6" to be deleted
Sep  6 20:37:19.933: INFO: Waiting up to 10m0s for PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 to get deleted
Sep  6 20:37:19.997: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (64.850337ms)
Sep  6 20:37:25.067: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (5.134804241s)
Sep  6 20:37:30.134: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (10.201854613s)
Sep  6 20:37:35.200: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (15.266902446s)
Sep  6 20:37:40.269: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (20.336111948s)
Sep  6 20:37:45.334: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 found and phase=Failed (25.401336545s)
Sep  6 20:37:50.399: INFO: PersistentVolume pvc-beaa455d-79a3-4471-9257-62806b6802c6 was removed
Sep  6 20:37:50.399: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8591 to be removed
Sep  6 20:37:50.464: INFO: Claim "azuredisk-8591" in namespace "pvc-xfpcn" doesn't exist in the system
Sep  6 20:37:50.464: INFO: deleting StorageClass azuredisk-8591-kubernetes.io-azure-disk-dynamic-sc-2dkhs
STEP: validating provisioned PV
STEP: checking the PV
... skipping 389 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  6 20:40:56.648: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.617 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 248 lines ...
I0906 20:18:18.604791       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0906 20:18:18.605332       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662495496" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662495496" (2022-09-06 19:18:15 +0000 UTC to 2023-09-06 19:18:15 +0000 UTC (now=2022-09-06 20:18:18.60532218 +0000 UTC))
I0906 20:18:18.605571       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662495498" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662495497" (2022-09-06 19:18:16 +0000 UTC to 2023-09-06 19:18:16 +0000 UTC (now=2022-09-06 20:18:18.605562111 +0000 UTC))
I0906 20:18:18.605603       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0906 20:18:18.605724       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0906 20:18:18.605972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0906 20:18:20.392663       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0906 20:18:20.392697       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0906 20:18:22.518653       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0906 20:18:22.519111       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-hx45zt-control-plane-nhv8k_c4232e55-d005-46ee-b5ba-114ddcedf07b became leader"
I0906 20:18:22.644137       1 request.go:600] Waited for 96.570796ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
I0906 20:18:22.693801       1 request.go:600] Waited for 146.224998ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/storage.k8s.io/v1?timeout=32s
I0906 20:18:22.743610       1 request.go:600] Waited for 196.016817ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/storage.k8s.io/v1beta1?timeout=32s
I0906 20:18:22.793235       1 request.go:600] Waited for 245.629515ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/admissionregistration.k8s.io/v1?timeout=32s
... skipping 39 lines ...
I0906 20:18:23.100707       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0906 20:18:23.100750       1 reflector.go:219] Starting reflector *v1.ServiceAccount (22h5m34.977648582s) from k8s.io/client-go/informers/factory.go:134
I0906 20:18:23.100759       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0906 20:18:23.100891       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0906 20:18:23.100960       1 reflector.go:219] Starting reflector *v1.Secret (22h5m34.977648582s) from k8s.io/client-go/informers/factory.go:134
I0906 20:18:23.100972       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0906 20:18:23.158811       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0906 20:18:23.158840       1 controllermanager.go:559] Starting "garbagecollector"
I0906 20:18:23.193798       1 garbagecollector.go:142] Starting garbage collector controller
I0906 20:18:23.193821       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0906 20:18:23.193837       1 graph_builder.go:273] garbage controller monitor not synced: no monitors
I0906 20:18:23.193872       1 graph_builder.go:289] GraphBuilder running
I0906 20:18:23.193975       1 controllermanager.go:574] Started "garbagecollector"
... skipping 43 lines ...
I0906 20:18:23.422126       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0906 20:18:23.422264       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0906 20:18:23.422390       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0906 20:18:23.422530       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0906 20:18:23.422633       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0906 20:18:23.422771       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0906 20:18:23.422925       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 20:18:23.423055       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0906 20:18:23.423248       1 controllermanager.go:574] Started "persistentvolume-binder"
I0906 20:18:23.423373       1 controllermanager.go:559] Starting "pv-protection"
I0906 20:18:23.423676       1 pv_controller_base.go:308] Starting persistent volume controller
I0906 20:18:23.423819       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0906 20:18:23.488202       1 controllermanager.go:574] Started "pv-protection"
... skipping 31 lines ...
I0906 20:18:24.110022       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0906 20:18:24.110039       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0906 20:18:24.110063       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0906 20:18:24.110113       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0906 20:18:24.110160       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0906 20:18:24.110175       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0906 20:18:24.110208       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0906 20:18:24.110253       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0906 20:18:24.110516       1 controllermanager.go:574] Started "attachdetach"
I0906 20:18:24.110535       1 controllermanager.go:559] Starting "persistentvolume-expander"
I0906 20:18:24.110619       1 attach_detach_controller.go:328] Starting attach detach controller
I0906 20:18:24.110634       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0906 20:18:24.110970       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-control-plane-nhv8k"
W0906 20:18:24.111023       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hx45zt-control-plane-nhv8k" does not exist
I0906 20:18:24.114990       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-hx45zt-control-plane-nhv8k"
I0906 20:18:24.115019       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-hx45zt-control-plane-nhv8k"
I0906 20:18:24.266798       1 plugins.go:639] Loaded volume plugin "kubernetes.io/cinder"
I0906 20:18:24.266826       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-disk"
I0906 20:18:24.266838       1 plugins.go:639] Loaded volume plugin "kubernetes.io/vsphere-volume"
I0906 20:18:24.266847       1 plugins.go:639] Loaded volume plugin "kubernetes.io/aws-ebs"
... skipping 626 lines ...
I0906 20:18:28.371383       1 replica_set.go:559] "Too few replicas" replicaSet="kube-system/calico-kube-controllers-969cf87c4" need=1 creating=1
I0906 20:18:28.372036       1 event.go:291] "Event occurred" object="kube-system/calico-kube-controllers" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set calico-kube-controllers-969cf87c4 to 1"
I0906 20:18:28.372524       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0906 20:18:28.374994       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0906 20:18:28.376832       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 20:18:28.371734151 +0000 UTC m=+12.896373872 - now: 2022-09-06 20:18:28.376827468 +0000 UTC m=+12.901467189]
I0906 20:18:28.380530       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="719.473588ms"
I0906 20:18:28.380562       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:18:28.380687       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-06 20:18:28.380667581 +0000 UTC m=+12.905307202"
I0906 20:18:28.381333       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-06 20:18:28 +0000 UTC - now: 2022-09-06 20:18:28.381327983 +0000 UTC m=+12.905967704]
I0906 20:18:28.382154       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="720.150986ms"
I0906 20:18:28.382187       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0906 20:18:28.382236       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 20:18:28.382218586 +0000 UTC m=+12.906858307"
I0906 20:18:28.382658       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-06 20:18:28 +0000 UTC - now: 2022-09-06 20:18:28.382653388 +0000 UTC m=+12.907293109]
I0906 20:18:28.385789       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0906 20:18:28.386201       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="3.968014ms"
I0906 20:18:28.386617       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0906 20:18:28.386682       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-06 20:18:28.386648801 +0000 UTC m=+12.911288422"
... skipping 412 lines ...
I0906 20:18:59.435930       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 20:18:59.435937       1 controller.go:790] Finished updateLoadBalancerHosts
I0906 20:18:59.435942       1 controller.go:731] It took 1.4301e-05 seconds to finish nodeSyncInternal
I0906 20:18:59.452395       1 controller_utils.go:221] Made sure that Node capz-hx45zt-control-plane-nhv8k has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0906 20:18:59.452666       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-control-plane-nhv8k"
I0906 20:19:01.628307       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-zj5cwf" (8.8µs)
I0906 20:19:02.599172       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-hx45zt-control-plane-nhv8k transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:18:39 +0000 UTC,LastTransitionTime:2022-09-06 20:18:04 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:18:59 +0000 UTC,LastTransitionTime:2022-09-06 20:18:59 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:19:02.599254       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-control-plane-nhv8k ReadyCondition updated. Updating timestamp.
I0906 20:19:02.599277       1 node_lifecycle_controller.go:893] Node capz-hx45zt-control-plane-nhv8k is healthy again, removing all taints
I0906 20:19:02.599377       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0906 20:19:03.440820       1 disruption.go:427] updatePod called on pod "calico-node-km77p"
I0906 20:19:03.441558       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-km77p, PodDisruptionBudget controller will avoid syncing.
I0906 20:19:03.441574       1 disruption.go:430] No matching pdb for pod "calico-node-km77p"
... skipping 255 lines ...
I0906 20:20:05.280473       1 certificate_controller.go:173] Finished syncing certificate request "csr-9vp24" (900ns)
I0906 20:20:05.280723       1 certificate_controller.go:173] Finished syncing certificate request "csr-9vp24" (6.764129ms)
I0906 20:20:05.280757       1 certificate_controller.go:173] Finished syncing certificate request "csr-9vp24" (1.2µs)
I0906 20:20:06.345326       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-hx45zt-mp-0000001}
I0906 20:20:06.345350       1 taint_manager.go:440] "Updating known taints on node" node="capz-hx45zt-mp-0000001" taints=[]
I0906 20:20:06.345425       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
W0906 20:20:06.345440       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hx45zt-mp-0000001" does not exist
I0906 20:20:06.346881       1 controller.go:693] Ignoring node capz-hx45zt-mp-0000001 with Ready condition status False
I0906 20:20:06.346919       1 controller.go:272] Triggering nodeSync
I0906 20:20:06.346928       1 controller.go:291] nodeSync has been triggered
I0906 20:20:06.346935       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0906 20:20:06.346943       1 controller.go:790] Finished updateLoadBalancerHosts
I0906 20:20:06.346949       1 controller.go:731] It took 1.56e-05 seconds to finish nodeSyncInternal
... skipping 164 lines ...
I0906 20:20:10.516372       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08be9ec718bd, ext:115041004198, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:10.509655       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be08be1bbfab73, ext:112990185720, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:10.507687       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000000"
I0906 20:20:10.516534       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-hx45zt-mp-0000000], creating 1
I0906 20:20:10.516765       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be08be9ecd278b, ext:115041401204, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:10.516840       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-hx45zt-mp-0000000], creating 1
W0906 20:20:10.517127       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-hx45zt-mp-0000000" does not exist
I0906 20:20:10.536328       1 controller_utils.go:591] Controller calico-node created pod calico-node-dcvcp
I0906 20:20:10.536375       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0906 20:20:10.536646       1 controller_utils.go:195] Controller still waiting on expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08be9ec718bd, ext:115041004198, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:10.536801       1 daemon_controller.go:1103] Updating daemon set status
I0906 20:20:10.537544       1 event.go:291] "Event occurred" object="kube-system/calico-node" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-dcvcp"
I0906 20:20:10.538934       1 disruption.go:415] addPod called on pod "calico-node-dcvcp"
... skipping 351 lines ...
I0906 20:20:37.542804       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08c560552815, ext:142067091654, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:37.542925       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08c5605c54c9, ext:142067561550, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:37.542949       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0906 20:20:37.542989       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0906 20:20:37.543046       1 daemon_controller.go:1103] Updating daemon set status
I0906 20:20:37.543137       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (1.818784ms)
I0906 20:20:37.616206       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-hx45zt-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:20:16 +0000 UTC,LastTransitionTime:2022-09-06 20:20:06 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:20:36 +0000 UTC,LastTransitionTime:2022-09-06 20:20:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:20:37.616292       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
I0906 20:20:37.623962       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-hx45zt-mp-0000001}
I0906 20:20:37.625293       1 taint_manager.go:440] "Updating known taints on node" node="capz-hx45zt-mp-0000001" taints=[]
I0906 20:20:37.624968       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
I0906 20:20:37.625717       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-hx45zt-mp-0000001"
I0906 20:20:37.627948       1 node_lifecycle_controller.go:893] Node capz-hx45zt-mp-0000001 is healthy again, removing all taints
... skipping 26 lines ...
I0906 20:20:41.802145       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08c66fce71ce, ext:146326698423, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:41.802217       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be08c66fd0cfa7, ext:146326853420, loc:(*time.Location)(0x731ea80)}}
I0906 20:20:41.802229       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0906 20:20:41.802270       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0906 20:20:41.802294       1 daemon_controller.go:1103] Updating daemon set status
I0906 20:20:41.802352       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (1.406678ms)
I0906 20:20:42.628756       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-hx45zt-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-06 20:20:20 +0000 UTC,LastTransitionTime:2022-09-06 20:20:10 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-06 20:20:40 +0000 UTC,LastTransitionTime:2022-09-06 20:20:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0906 20:20:42.628804       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000000 ReadyCondition updated. Updating timestamp.
I0906 20:20:42.639151       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:20:42.639524       1 node_lifecycle_controller.go:893] Node capz-hx45zt-mp-0000000 is healthy again, removing all taints
I0906 20:20:42.640328       1 node_lifecycle_controller.go:1214] Controller detected that zone westus3::0 is now in state Normal.
I0906 20:20:42.639645       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-hx45zt-mp-0000000}
I0906 20:20:42.640445       1 taint_manager.go:440] "Updating known taints on node" node="capz-hx45zt-mp-0000000" taints=[]
... skipping 330 lines ...
I0906 20:23:03.453150       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: claim azuredisk-8081/pvc-h45zl not found
I0906 20:23:03.453238       1 pv_controller.go:1108] reclaimVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: policy is Delete
I0906 20:23:03.453347       1 pv_controller.go:1753] scheduleOperation[delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]]
I0906 20:23:03.453448       1 pv_controller.go:1764] operation "delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]" is already running, skipping
I0906 20:23:03.454658       1 pv_controller.go:1341] isVolumeReleased[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is released
I0906 20:23:03.454674       1 pv_controller.go:1405] doDeleteVolume [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]
I0906 20:23:03.473809       1 pv_controller.go:1260] deletion of volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:23:03.473832       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: set phase Failed
I0906 20:23:03.473841       1 pv_controller.go:858] updating PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: set phase Failed
I0906 20:23:03.476829       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" with version 1185
I0906 20:23:03.476954       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: phase: Failed, bound to: "azuredisk-8081/pvc-h45zl (uid: 95392d8c-d1a7-4c9b-a955-e5df803b4111)", boundByController: true
I0906 20:23:03.477058       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" with version 1185
I0906 20:23:03.477083       1 pv_controller.go:879] volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" entered phase "Failed"
I0906 20:23:03.477092       1 pv_controller.go:901] volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:23:03.477154       1 pv_protection_controller.go:205] Got event on PV pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111
E0906 20:23:03.477242       1 goroutinemap.go:150] Operation for "delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]" failed. No retries permitted until 2022-09-06 20:23:03.97721257 +0000 UTC m=+288.501852291 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:23:03.477066       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is bound to claim azuredisk-8081/pvc-h45zl
I0906 20:23:03.477325       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: claim azuredisk-8081/pvc-h45zl not found
I0906 20:23:03.477339       1 pv_controller.go:1108] reclaimVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: policy is Delete
I0906 20:23:03.477390       1 pv_controller.go:1753] scheduleOperation[delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]]
I0906 20:23:03.477406       1 pv_controller.go:1766] operation "delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]" postponed due to exponential backoff
I0906 20:23:03.477482       1 event.go:291] "Event occurred" object="pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
... skipping 12 lines ...
I0906 20:23:07.625531       1 gc_controller.go:161] GC'ing orphaned
I0906 20:23:07.625562       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:23:07.667161       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
I0906 20:23:12.646303       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:23:12.736584       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:23:12.736644       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" with version 1185
I0906 20:23:12.736740       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: phase: Failed, bound to: "azuredisk-8081/pvc-h45zl (uid: 95392d8c-d1a7-4c9b-a955-e5df803b4111)", boundByController: true
I0906 20:23:12.736823       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is bound to claim azuredisk-8081/pvc-h45zl
I0906 20:23:12.736900       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: claim azuredisk-8081/pvc-h45zl not found
I0906 20:23:12.736917       1 pv_controller.go:1108] reclaimVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: policy is Delete
I0906 20:23:12.736933       1 pv_controller.go:1753] scheduleOperation[delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]]
I0906 20:23:12.736985       1 pv_controller.go:1232] deleteVolumeOperation [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111] started
I0906 20:23:12.740969       1 pv_controller.go:1341] isVolumeReleased[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is released
I0906 20:23:12.740989       1 pv_controller.go:1405] doDeleteVolume [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]
I0906 20:23:12.741024       1 pv_controller.go:1260] deletion of volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) since it's in attaching or detaching state
I0906 20:23:12.741037       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: set phase Failed
I0906 20:23:12.741049       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: phase Failed already set
E0906 20:23:12.741104       1 goroutinemap.go:150] Operation for "delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]" failed. No retries permitted until 2022-09-06 20:23:13.741058373 +0000 UTC m=+298.265697994 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) since it's in attaching or detaching state"
I0906 20:23:14.410567       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="74.899µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43162" resp=200
I0906 20:23:22.097755       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) returned with <nil>
I0906 20:23:22.097817       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111) succeeded
I0906 20:23:22.097827       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111 was detached from node:capz-hx45zt-mp-0000001
I0906 20:23:22.097851       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111") on node "capz-hx45zt-mp-0000001" 
I0906 20:23:24.410329       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63.299µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56286" resp=200
... skipping 8 lines ...
I0906 20:23:27.626496       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:23:27.632583       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:23:27.634533       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 0 items received
I0906 20:23:27.646641       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:23:27.737091       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:23:27.737145       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" with version 1185
I0906 20:23:27.737180       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: phase: Failed, bound to: "azuredisk-8081/pvc-h45zl (uid: 95392d8c-d1a7-4c9b-a955-e5df803b4111)", boundByController: true
I0906 20:23:27.737239       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is bound to claim azuredisk-8081/pvc-h45zl
I0906 20:23:27.737257       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: claim azuredisk-8081/pvc-h45zl not found
I0906 20:23:27.737267       1 pv_controller.go:1108] reclaimVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: policy is Delete
I0906 20:23:27.737299       1 pv_controller.go:1753] scheduleOperation[delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]]
I0906 20:23:27.737334       1 pv_controller.go:1232] deleteVolumeOperation [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111] started
I0906 20:23:27.749411       1 pv_controller.go:1341] isVolumeReleased[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is released
I0906 20:23:27.749430       1 pv_controller.go:1405] doDeleteVolume [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]
I0906 20:23:27.846909       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0906 20:23:28.080045       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:23:32.986123       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111
I0906 20:23:32.986160       1 pv_controller.go:1436] volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" deleted
I0906 20:23:32.986173       1 pv_controller.go:1284] deleteVolumeOperation [pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: success
I0906 20:23:32.992244       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111" with version 1232
I0906 20:23:32.992380       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: phase: Failed, bound to: "azuredisk-8081/pvc-h45zl (uid: 95392d8c-d1a7-4c9b-a955-e5df803b4111)", boundByController: true
I0906 20:23:32.992484       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: volume is bound to claim azuredisk-8081/pvc-h45zl
I0906 20:23:32.992564       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: claim azuredisk-8081/pvc-h45zl not found
I0906 20:23:32.992634       1 pv_controller.go:1108] reclaimVolume[pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111]: policy is Delete
I0906 20:23:32.992657       1 pv_controller.go:1753] scheduleOperation[delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]]
I0906 20:23:32.992666       1 pv_controller.go:1764] operation "delete-pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111[7244f5c7-2c35-48a9-ac05-62234ea9f523]" is already running, skipping
I0906 20:23:32.992689       1 pv_protection_controller.go:205] Got event on PV pvc-95392d8c-d1a7-4c9b-a955-e5df803b4111
... skipping 43 lines ...
I0906 20:23:37.352508       1 pv_controller.go:1764] operation "provision-azuredisk-495/pvc-njww6[d4896b07-d8a7-4f29-a4dc-7aa5812399b2]" is already running, skipping
I0906 20:23:37.352634       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-495/pvc-njww6"
I0906 20:23:37.352884       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-495/pvc-njww6" with version 1264
I0906 20:23:37.357130       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 StorageAccountType:StandardSSD_LRS Size:10
I0906 20:23:39.252433       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0906 20:23:39.273612       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-x6sr8, uid 4a4007b3-3263-4118-b73e-0651657088fc, event type delete
E0906 20:23:39.299068       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-s2cmj" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0906 20:23:39.325568       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f18132f51f2, uid 66b04519-0e7f-4aa5-9b1d-a484acbb4675, event type delete
I0906 20:23:39.329804       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f1985b1e4a9, uid b4cd5901-1881-4a2f-bcdb-8426350ef3f6, event type delete
I0906 20:23:39.332149       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f1a891e94b4, uid 30aeec88-bc13-45b1-a614-d2ed142483f2, event type delete
I0906 20:23:39.334922       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f1ac811c4b4, uid a47942cc-e4d8-40db-a334-41a2fb0fe2b7, event type delete
I0906 20:23:39.337305       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f1acdf62436, uid 1b832eee-b048-4474-946d-436a4cf0fcb0, event type delete
I0906 20:23:39.344904       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-h7gdv.17125f1ad4b712a3, uid 309bd97e-232f-4afa-9114-ac5d6da4ce10, event type delete
... skipping 85 lines ...
I0906 20:23:40.406475       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (168.219804ms)
I0906 20:23:40.438749       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" to node "capz-hx45zt-mp-0000001".
I0906 20:23:40.472491       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" lun 0 to node "capz-hx45zt-mp-0000001".
I0906 20:23:40.472530       1 azure_controller_vmss.go:101] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - attach disk(capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) with DiskEncryptionSetID()
I0906 20:23:41.276159       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5089
I0906 20:23:41.329115       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5089, name default-token-bwpls, uid 07f237ff-57a2-43c4-afdb-d4a6f811e7ae, event type delete
E0906 20:23:41.356701       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5089/default: secrets "default-token-cxhhb" is forbidden: unable to create new content in namespace azuredisk-5089 because it is being terminated
I0906 20:23:41.396205       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5089/default), service account deleted, removing tokens
I0906 20:23:41.396766       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5089, name default, uid 56b59ada-ff55-41c8-82e6-8f6c3b4b166e, event type delete
I0906 20:23:41.396790       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5089" (2.7µs)
I0906 20:23:41.433578       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5089, name kube-root-ca.crt, uid ebccaea8-3942-4cbf-a23d-0c5b6742bba2, event type delete
I0906 20:23:41.436784       1 publisher.go:181] Finished syncing namespace "azuredisk-5089" (3.154764ms)
I0906 20:23:41.452435       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5089" (1.6µs)
... skipping 119 lines ...
I0906 20:23:56.485874       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:23:56.485922       1 pv_controller.go:1764] operation "delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]" is already running, skipping
I0906 20:23:56.485959       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2] started
I0906 20:23:56.485462       1 pv_protection_controller.go:205] Got event on PV pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2
I0906 20:23:56.487581       1 pv_controller.go:1341] isVolumeReleased[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is released
I0906 20:23:56.487723       1 pv_controller.go:1405] doDeleteVolume [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]
I0906 20:23:56.508445       1 pv_controller.go:1260] deletion of volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:23:56.508464       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: set phase Failed
I0906 20:23:56.508472       1 pv_controller.go:858] updating PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: set phase Failed
I0906 20:23:56.511334       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1338
I0906 20:23:56.511385       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase: Failed, bound to: "azuredisk-495/pvc-njww6 (uid: d4896b07-d8a7-4f29-a4dc-7aa5812399b2)", boundByController: true
I0906 20:23:56.511410       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is bound to claim azuredisk-495/pvc-njww6
I0906 20:23:56.511428       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: claim azuredisk-495/pvc-njww6 not found
I0906 20:23:56.511435       1 pv_controller.go:1108] reclaimVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: policy is Delete
I0906 20:23:56.511465       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:23:56.511473       1 pv_controller.go:1764] operation "delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]" is already running, skipping
I0906 20:23:56.511488       1 pv_protection_controller.go:205] Got event on PV pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2
I0906 20:23:56.512279       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1338
I0906 20:23:56.512302       1 pv_controller.go:879] volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" entered phase "Failed"
I0906 20:23:56.512338       1 pv_controller.go:901] volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:23:56.512422       1 goroutinemap.go:150] Operation for "delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]" failed. No retries permitted until 2022-09-06 20:23:57.01236233 +0000 UTC m=+341.537002051 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:23:56.512537       1 event.go:291] "Event occurred" object="pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:23:57.604143       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:23:57.648258       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:23:57.738597       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:23:57.738687       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1338
I0906 20:23:57.738752       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase: Failed, bound to: "azuredisk-495/pvc-njww6 (uid: d4896b07-d8a7-4f29-a4dc-7aa5812399b2)", boundByController: true
I0906 20:23:57.738803       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is bound to claim azuredisk-495/pvc-njww6
I0906 20:23:57.738824       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: claim azuredisk-495/pvc-njww6 not found
I0906 20:23:57.738837       1 pv_controller.go:1108] reclaimVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: policy is Delete
I0906 20:23:57.738870       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:23:57.738925       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2] started
I0906 20:23:57.743706       1 pv_controller.go:1341] isVolumeReleased[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is released
I0906 20:23:57.743724       1 pv_controller.go:1405] doDeleteVolume [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]
I0906 20:23:57.763427       1 pv_controller.go:1260] deletion of volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:23:57.763447       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: set phase Failed
I0906 20:23:57.763458       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase Failed already set
E0906 20:23:57.763511       1 goroutinemap.go:150] Operation for "delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]" failed. No retries permitted until 2022-09-06 20:23:58.76346559 +0000 UTC m=+343.288105311 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:23:58.094246       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:23:58.591778       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 13 items received
I0906 20:23:58.618385       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 97 items received
I0906 20:24:04.409684       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="66.099µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:35296" resp=200
I0906 20:24:06.704293       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
I0906 20:24:06.704330       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 to the node "capz-hx45zt-mp-0000001" mounted false
... skipping 9 lines ...
I0906 20:24:07.627461       1 gc_controller.go:161] GC'ing orphaned
I0906 20:24:07.627488       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:24:07.676181       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
I0906 20:24:12.649024       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:24:12.739439       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:24:12.739518       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1338
I0906 20:24:12.739611       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase: Failed, bound to: "azuredisk-495/pvc-njww6 (uid: d4896b07-d8a7-4f29-a4dc-7aa5812399b2)", boundByController: true
I0906 20:24:12.739675       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is bound to claim azuredisk-495/pvc-njww6
I0906 20:24:12.739701       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: claim azuredisk-495/pvc-njww6 not found
I0906 20:24:12.739745       1 pv_controller.go:1108] reclaimVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: policy is Delete
I0906 20:24:12.739763       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:24:12.739819       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2] started
I0906 20:24:12.743706       1 pv_controller.go:1341] isVolumeReleased[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is released
I0906 20:24:12.743725       1 pv_controller.go:1405] doDeleteVolume [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]
I0906 20:24:12.743786       1 pv_controller.go:1260] deletion of volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) since it's in attaching or detaching state
I0906 20:24:12.743818       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: set phase Failed
I0906 20:24:12.743867       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase Failed already set
E0906 20:24:12.743917       1 goroutinemap.go:150] Operation for "delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]" failed. No retries permitted until 2022-09-06 20:24:14.743889927 +0000 UTC m=+359.268529548 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) since it's in attaching or detaching state"
I0906 20:24:14.409605       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="64.3µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44308" resp=200
I0906 20:24:14.616614       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0906 20:24:21.616439       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0906 20:24:22.171382       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) returned with <nil>
I0906 20:24:22.171433       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2) succeeded
I0906 20:24:22.171442       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2 was detached from node:capz-hx45zt-mp-0000001
... skipping 5 lines ...
I0906 20:24:27.604980       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:24:27.628337       1 gc_controller.go:161] GC'ing orphaned
I0906 20:24:27.628363       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:24:27.649613       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:24:27.739571       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:24:27.739665       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1338
I0906 20:24:27.739752       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase: Failed, bound to: "azuredisk-495/pvc-njww6 (uid: d4896b07-d8a7-4f29-a4dc-7aa5812399b2)", boundByController: true
I0906 20:24:27.739828       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is bound to claim azuredisk-495/pvc-njww6
I0906 20:24:27.739877       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: claim azuredisk-495/pvc-njww6 not found
I0906 20:24:27.739892       1 pv_controller.go:1108] reclaimVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: policy is Delete
I0906 20:24:27.739921       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:24:27.739976       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2] started
I0906 20:24:27.751788       1 pv_controller.go:1341] isVolumeReleased[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is released
... skipping 3 lines ...
I0906 20:24:29.749502       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-control-plane-nhv8k"
I0906 20:24:32.679891       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-control-plane-nhv8k ReadyCondition updated. Updating timestamp.
I0906 20:24:32.934156       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2
I0906 20:24:32.934185       1 pv_controller.go:1436] volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" deleted
I0906 20:24:32.934356       1 pv_controller.go:1284] deleteVolumeOperation [pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: success
I0906 20:24:32.939276       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2" with version 1394
I0906 20:24:32.939609       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: phase: Failed, bound to: "azuredisk-495/pvc-njww6 (uid: d4896b07-d8a7-4f29-a4dc-7aa5812399b2)", boundByController: true
I0906 20:24:32.939795       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: volume is bound to claim azuredisk-495/pvc-njww6
I0906 20:24:32.939966       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: claim azuredisk-495/pvc-njww6 not found
I0906 20:24:32.940121       1 pv_controller.go:1108] reclaimVolume[pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2]: policy is Delete
I0906 20:24:32.940403       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2[f8842132-d0fb-4b86-a5c9-a0d4136da10c]]
I0906 20:24:32.939534       1 pv_protection_controller.go:205] Got event on PV pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2
I0906 20:24:32.940743       1 pv_protection_controller.go:125] Processing PV pvc-d4896b07-d8a7-4f29-a4dc-7aa5812399b2
... skipping 249 lines ...
I0906 20:24:53.770060       1 pv_controller.go:1108] reclaimVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: policy is Delete
I0906 20:24:53.770110       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]]
I0906 20:24:53.770193       1 pv_controller.go:1764] operation "delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]" is already running, skipping
I0906 20:24:53.770287       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362] started
I0906 20:24:53.771712       1 pv_controller.go:1341] isVolumeReleased[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is released
I0906 20:24:53.771745       1 pv_controller.go:1405] doDeleteVolume [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]
I0906 20:24:53.793453       1 pv_controller.go:1260] deletion of volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:24:53.793470       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: set phase Failed
I0906 20:24:53.793478       1 pv_controller.go:858] updating PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: set phase Failed
I0906 20:24:53.796248       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" with version 1479
I0906 20:24:53.796354       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: phase: Failed, bound to: "azuredisk-2888/pvc-75255 (uid: ebc50be2-3be9-40b6-a557-bf81eb8a0362)", boundByController: true
I0906 20:24:53.796413       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is bound to claim azuredisk-2888/pvc-75255
I0906 20:24:53.796451       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: claim azuredisk-2888/pvc-75255 not found
I0906 20:24:53.796461       1 pv_controller.go:1108] reclaimVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: policy is Delete
I0906 20:24:53.796506       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]]
I0906 20:24:53.796515       1 pv_controller.go:1764] operation "delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]" is already running, skipping
I0906 20:24:53.796257       1 pv_protection_controller.go:205] Got event on PV pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362
I0906 20:24:53.797135       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" with version 1479
I0906 20:24:53.797290       1 pv_controller.go:879] volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" entered phase "Failed"
I0906 20:24:53.797305       1 pv_controller.go:901] volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:24:53.797380       1 goroutinemap.go:150] Operation for "delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]" failed. No retries permitted until 2022-09-06 20:24:54.297327787 +0000 UTC m=+398.821967408 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:24:53.797465       1 event.go:291] "Event occurred" object="pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:24:54.409896       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.299µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41088" resp=200
I0906 20:24:54.598200       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 486 items received
I0906 20:24:56.635389       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0906 20:24:56.738794       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
I0906 20:24:56.740786       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 to the node "capz-hx45zt-mp-0000001" mounted false
... skipping 8 lines ...
I0906 20:24:56.875964       1 azure_controller_vmss.go:175] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362)
I0906 20:24:57.605600       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:24:57.651404       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:24:57.683945       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
I0906 20:24:57.740782       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:24:57.740842       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" with version 1479
I0906 20:24:57.740919       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: phase: Failed, bound to: "azuredisk-2888/pvc-75255 (uid: ebc50be2-3be9-40b6-a557-bf81eb8a0362)", boundByController: true
I0906 20:24:57.740979       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is bound to claim azuredisk-2888/pvc-75255
I0906 20:24:57.741006       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: claim azuredisk-2888/pvc-75255 not found
I0906 20:24:57.741041       1 pv_controller.go:1108] reclaimVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: policy is Delete
I0906 20:24:57.741062       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]]
I0906 20:24:57.741123       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362] started
I0906 20:24:57.745142       1 pv_controller.go:1341] isVolumeReleased[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is released
I0906 20:24:57.745160       1 pv_controller.go:1405] doDeleteVolume [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]
I0906 20:24:57.745192       1 pv_controller.go:1260] deletion of volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) since it's in attaching or detaching state
I0906 20:24:57.745204       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: set phase Failed
I0906 20:24:57.745214       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: phase Failed already set
E0906 20:24:57.745249       1 goroutinemap.go:150] Operation for "delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]" failed. No retries permitted until 2022-09-06 20:24:58.745222804 +0000 UTC m=+403.269862625 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) since it's in attaching or detaching state"
I0906 20:24:58.057830       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0906 20:24:58.120879       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:25:02.604693       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0906 20:25:04.410275       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="53.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39588" resp=200
I0906 20:25:07.611826       1 controller.go:272] Triggering nodeSync
I0906 20:25:07.611859       1 controller.go:291] nodeSync has been triggered
... skipping 8 lines ...
I0906 20:25:12.206078       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362) succeeded
I0906 20:25:12.206087       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362 was detached from node:capz-hx45zt-mp-0000001
I0906 20:25:12.206139       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362") on node "capz-hx45zt-mp-0000001" 
I0906 20:25:12.651818       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:12.741204       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:25:12.741265       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" with version 1479
I0906 20:25:12.741300       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: phase: Failed, bound to: "azuredisk-2888/pvc-75255 (uid: ebc50be2-3be9-40b6-a557-bf81eb8a0362)", boundByController: true
I0906 20:25:12.741399       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is bound to claim azuredisk-2888/pvc-75255
I0906 20:25:12.741463       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: claim azuredisk-2888/pvc-75255 not found
I0906 20:25:12.741480       1 pv_controller.go:1108] reclaimVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: policy is Delete
I0906 20:25:12.741525       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]]
I0906 20:25:12.741577       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362] started
I0906 20:25:12.744254       1 pv_controller.go:1341] isVolumeReleased[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is released
I0906 20:25:12.744273       1 pv_controller.go:1405] doDeleteVolume [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]
I0906 20:25:14.410434       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="72.499µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34496" resp=200
I0906 20:25:17.925023       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362
I0906 20:25:17.925054       1 pv_controller.go:1436] volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" deleted
I0906 20:25:17.925068       1 pv_controller.go:1284] deleteVolumeOperation [pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: success
I0906 20:25:17.936968       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362" with version 1517
I0906 20:25:17.937033       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: phase: Failed, bound to: "azuredisk-2888/pvc-75255 (uid: ebc50be2-3be9-40b6-a557-bf81eb8a0362)", boundByController: true
I0906 20:25:17.937060       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: volume is bound to claim azuredisk-2888/pvc-75255
I0906 20:25:17.937107       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: claim azuredisk-2888/pvc-75255 not found
I0906 20:25:17.937116       1 pv_controller.go:1108] reclaimVolume[pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362]: policy is Delete
I0906 20:25:17.937130       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]]
I0906 20:25:17.937137       1 pv_controller.go:1764] operation "delete-pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362[5702785f-abb4-402f-aca8-728c9f074c91]" is already running, skipping
I0906 20:25:17.937153       1 pv_protection_controller.go:205] Got event on PV pvc-ebc50be2-3be9-40b6-a557-bf81eb8a0362
... skipping 116 lines ...
I0906 20:25:24.513710       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2888, name azuredisk-volume-tester-k8kv8.17125f3498972bca, uid 4e68f15a-d952-4e60-8d57-782422e64bbc, event type delete
I0906 20:25:24.517310       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2888, name pvc-75255.17125f3187b0ff54, uid 13e9b208-024f-4f2f-8cb9-ec949009b037, event type delete
I0906 20:25:24.521860       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2888, name pvc-75255.17125f321b6699cc, uid 2cb6f61d-c281-4f55-b08f-d9ae32087ad9, event type delete
I0906 20:25:24.570538       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2888, name kube-root-ca.crt, uid 76364b16-12fe-4ce3-9b7d-46f106e2990c, event type delete
I0906 20:25:24.575629       1 publisher.go:181] Finished syncing namespace "azuredisk-2888" (5.045342ms)
I0906 20:25:24.587932       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2888, name default-token-zfxfn, uid e4b62382-807e-4127-8cb0-2b617e66dc81, event type delete
E0906 20:25:24.605092       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2888/default: secrets "default-token-rfvcf" is forbidden: unable to create new content in namespace azuredisk-2888 because it is being terminated
I0906 20:25:24.625392       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2888, name default, uid 2c5db484-662a-4fea-b2b0-85c9e6caa08b, event type delete
I0906 20:25:24.625510       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2888/default), service account deleted, removing tokens
I0906 20:25:24.625595       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2888" (1.9µs)
I0906 20:25:24.652778       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2888, estimate: 0, errors: <nil>
I0906 20:25:24.654247       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2888" (2.5µs)
I0906 20:25:24.663696       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2888" (210.371787ms)
... skipping 118 lines ...
I0906 20:25:35.554271       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:25:35.554367       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:25:35.554464       1 pv_controller.go:1764] operation "delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]" is already running, skipping
I0906 20:25:35.554583       1 pv_protection_controller.go:205] Got event on PV pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d
I0906 20:25:35.555466       1 pv_controller.go:1341] isVolumeReleased[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is released
I0906 20:25:35.555482       1 pv_controller.go:1405] doDeleteVolume [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]
I0906 20:25:35.590392       1 pv_controller.go:1260] deletion of volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:25:35.590418       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: set phase Failed
I0906 20:25:35.590427       1 pv_controller.go:858] updating PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: set phase Failed
I0906 20:25:35.594082       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1598
I0906 20:25:35.594110       1 pv_controller.go:879] volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" entered phase "Failed"
I0906 20:25:35.594120       1 pv_controller.go:901] volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:25:35.594167       1 goroutinemap.go:150] Operation for "delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]" failed. No retries permitted until 2022-09-06 20:25:36.094139414 +0000 UTC m=+440.618779035 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:25:35.594382       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1598
I0906 20:25:35.594643       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase: Failed, bound to: "azuredisk-5429/pvc-xr629 (uid: fe6543f2-1b01-4fb9-9718-f047abc5c34d)", boundByController: true
I0906 20:25:35.594791       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is bound to claim azuredisk-5429/pvc-xr629
I0906 20:25:35.594944       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: claim azuredisk-5429/pvc-xr629 not found
I0906 20:25:35.602936       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:25:35.602987       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:25:35.602999       1 pv_controller.go:1766] operation "delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]" postponed due to exponential backoff
I0906 20:25:35.594542       1 pv_protection_controller.go:205] Got event on PV pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d
I0906 20:25:35.594581       1 event.go:291] "Event occurred" object="pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:25:38.711405       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:25:40.333983       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:25:42.653598       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:42.742970       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:25:42.743024       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1598
I0906 20:25:42.743225       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase: Failed, bound to: "azuredisk-5429/pvc-xr629 (uid: fe6543f2-1b01-4fb9-9718-f047abc5c34d)", boundByController: true
I0906 20:25:42.743260       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is bound to claim azuredisk-5429/pvc-xr629
I0906 20:25:42.743286       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: claim azuredisk-5429/pvc-xr629 not found
I0906 20:25:42.743297       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:25:42.743311       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:25:42.743345       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d] started
I0906 20:25:42.747098       1 pv_controller.go:1341] isVolumeReleased[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is released
I0906 20:25:42.747116       1 pv_controller.go:1405] doDeleteVolume [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]
I0906 20:25:42.766116       1 pv_controller.go:1260] deletion of volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:25:42.766141       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: set phase Failed
I0906 20:25:42.766150       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase Failed already set
E0906 20:25:42.766340       1 goroutinemap.go:150] Operation for "delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]" failed. No retries permitted until 2022-09-06 20:25:43.766158164 +0000 UTC m=+448.290797785 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:25:43.346418       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:25:44.110616       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 110 items received
I0906 20:25:44.409636       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="56.1µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34906" resp=200
I0906 20:25:45.110512       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 68 items received
I0906 20:25:45.669362       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0906 20:25:46.782574       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
... skipping 15 lines ...
I0906 20:25:54.410293       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="59µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34962" resp=200
I0906 20:25:56.626064       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 26 items received
I0906 20:25:57.606051       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:57.654680       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:25:57.743703       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:25:57.743764       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1598
I0906 20:25:57.743802       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase: Failed, bound to: "azuredisk-5429/pvc-xr629 (uid: fe6543f2-1b01-4fb9-9718-f047abc5c34d)", boundByController: true
I0906 20:25:57.743832       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is bound to claim azuredisk-5429/pvc-xr629
I0906 20:25:57.743849       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: claim azuredisk-5429/pvc-xr629 not found
I0906 20:25:57.743857       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:25:57.743873       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:25:57.743898       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d] started
I0906 20:25:57.754137       1 pv_controller.go:1341] isVolumeReleased[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is released
I0906 20:25:57.754153       1 pv_controller.go:1405] doDeleteVolume [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]
I0906 20:25:57.754299       1 pv_controller.go:1260] deletion of volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) since it's in attaching or detaching state
I0906 20:25:57.754317       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: set phase Failed
I0906 20:25:57.754327       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase Failed already set
E0906 20:25:57.754425       1 goroutinemap.go:150] Operation for "delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]" failed. No retries permitted until 2022-09-06 20:25:59.75439783 +0000 UTC m=+464.279037451 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) since it's in attaching or detaching state"
I0906 20:25:58.146377       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:25:59.859911       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0906 20:26:02.154596       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) returned with <nil>
I0906 20:26:02.154640       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d) succeeded
I0906 20:26:02.154801       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d was detached from node:capz-hx45zt-mp-0000001
I0906 20:26:02.154830       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d") on node "capz-hx45zt-mp-0000001" 
... skipping 5 lines ...
I0906 20:26:12.618065       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0906 20:26:12.622759       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 28 items received
I0906 20:26:12.654967       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:26:12.695534       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000000 ReadyCondition updated. Updating timestamp.
I0906 20:26:12.744305       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:26:12.744370       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1598
I0906 20:26:12.744405       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase: Failed, bound to: "azuredisk-5429/pvc-xr629 (uid: fe6543f2-1b01-4fb9-9718-f047abc5c34d)", boundByController: true
I0906 20:26:12.744439       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is bound to claim azuredisk-5429/pvc-xr629
I0906 20:26:12.744458       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: claim azuredisk-5429/pvc-xr629 not found
I0906 20:26:12.744466       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:26:12.744483       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:26:12.744511       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d] started
I0906 20:26:12.748364       1 pv_controller.go:1341] isVolumeReleased[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is released
I0906 20:26:12.748385       1 pv_controller.go:1405] doDeleteVolume [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]
I0906 20:26:14.410024       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="57.8µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:32888" resp=200
I0906 20:26:17.925251       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d
I0906 20:26:17.925285       1 pv_controller.go:1436] volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" deleted
I0906 20:26:17.925298       1 pv_controller.go:1284] deleteVolumeOperation [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: success
I0906 20:26:17.937493       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d" with version 1662
I0906 20:26:17.937737       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: phase: Failed, bound to: "azuredisk-5429/pvc-xr629 (uid: fe6543f2-1b01-4fb9-9718-f047abc5c34d)", boundByController: true
I0906 20:26:17.937772       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: volume is bound to claim azuredisk-5429/pvc-xr629
I0906 20:26:17.937795       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: claim azuredisk-5429/pvc-xr629 not found
I0906 20:26:17.937805       1 pv_controller.go:1108] reclaimVolume[pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d]: policy is Delete
I0906 20:26:17.937821       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d[3925be72-e1e0-4bf2-819f-0bcf240780f9]]
I0906 20:26:17.937847       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d] started
I0906 20:26:17.937649       1 pv_protection_controller.go:205] Got event on PV pvc-fe6543f2-1b01-4fb9-9718-f047abc5c34d
... skipping 110 lines ...
I0906 20:26:25.790418       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" to node "capz-hx45zt-mp-0000001".
I0906 20:26:25.825497       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" lun 0 to node "capz-hx45zt-mp-0000001".
I0906 20:26:25.825538       1 azure_controller_vmss.go:101] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - attach disk(capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) with DiskEncryptionSetID()
I0906 20:26:26.523630       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5429
I0906 20:26:26.538617       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5429, name default-token-tbfdp, uid e3855869-8429-49c2-9cd2-f51cf8b00829, event type delete
I0906 20:26:26.562825       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name azuredisk-volume-tester-qp2bq.17125f3bf5f0f7ad, uid c79ea5df-a45c-435b-87b6-73de51539522, event type delete
E0906 20:26:26.565147       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5429/default: secrets "default-token-2f4cv" is forbidden: unable to create new content in namespace azuredisk-5429 because it is being terminated
I0906 20:26:26.566982       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name azuredisk-volume-tester-qp2bq.17125f3d4351e186, uid e4b20307-4d2c-4435-ae75-e06e19d3d49c, event type delete
I0906 20:26:26.569438       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name azuredisk-volume-tester-qp2bq.17125f3e6b93254d, uid 483cea39-b117-42e0-94e2-88e0990655ad, event type delete
I0906 20:26:26.572804       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name azuredisk-volume-tester-qp2bq.17125f3e6d99a700, uid 3e81afe0-93df-48b7-8aa0-0dad4be105c6, event type delete
I0906 20:26:26.575815       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name azuredisk-volume-tester-qp2bq.17125f3e73dcda93, uid b7d17f6f-59aa-4090-95f1-e601c5e56fb1, event type delete
I0906 20:26:26.578233       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name pvc-xr629.17125f3b3e9481c2, uid ab93d776-5e10-4409-815a-5c98770306ba, event type delete
I0906 20:26:26.581760       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5429, name pvc-xr629.17125f3bd34295f2, uid 29462a14-52a7-41f1-a6ec-827512faa633, event type delete
... skipping 756 lines ...
I0906 20:27:50.729900       1 pv_controller.go:1108] reclaimVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: policy is Delete
I0906 20:27:50.730024       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]]
I0906 20:27:50.730071       1 pv_controller.go:1764] operation "delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]" is already running, skipping
I0906 20:27:50.729368       1 pv_protection_controller.go:205] Got event on PV pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1
I0906 20:27:50.731042       1 pv_controller.go:1341] isVolumeReleased[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: volume is released
I0906 20:27:50.731057       1 pv_controller.go:1405] doDeleteVolume [pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]
I0906 20:27:50.731084       1 pv_controller.go:1260] deletion of volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) since it's in attaching or detaching state
I0906 20:27:50.731095       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: set phase Failed
I0906 20:27:50.731103       1 pv_controller.go:858] updating PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: set phase Failed
I0906 20:27:50.733345       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" with version 1901
I0906 20:27:50.733544       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: phase: Failed, bound to: "azuredisk-3090/pvc-lrr98 (uid: 83283382-47c1-46cb-bed7-d60fb4ee95b1)", boundByController: true
I0906 20:27:50.733720       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: volume is bound to claim azuredisk-3090/pvc-lrr98
I0906 20:27:50.733863       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: claim azuredisk-3090/pvc-lrr98 not found
I0906 20:27:50.733962       1 pv_controller.go:1108] reclaimVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: policy is Delete
I0906 20:27:50.734087       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]]
I0906 20:27:50.734196       1 pv_controller.go:1764] operation "delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]" is already running, skipping
I0906 20:27:50.733504       1 pv_protection_controller.go:205] Got event on PV pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1
I0906 20:27:50.734677       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" with version 1901
I0906 20:27:50.734703       1 pv_controller.go:879] volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" entered phase "Failed"
I0906 20:27:50.734712       1 pv_controller.go:901] volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) since it's in attaching or detaching state
E0906 20:27:50.734826       1 goroutinemap.go:150] Operation for "delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]" failed. No retries permitted until 2022-09-06 20:27:51.234734374 +0000 UTC m=+575.759374095 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) since it's in attaching or detaching state"
I0906 20:27:50.735154       1 event.go:291] "Event occurred" object="pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) since it's in attaching or detaching state"
I0906 20:27:54.409655       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.699µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51110" resp=200
I0906 20:27:56.223268       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) returned with <nil>
I0906 20:27:56.223311       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1) succeeded
I0906 20:27:56.223321       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1 was detached from node:capz-hx45zt-mp-0000000
I0906 20:27:56.223342       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1") on node "capz-hx45zt-mp-0000000" 
I0906 20:27:57.341944       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 5 lines ...
I0906 20:27:57.748689       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is bound to claim azuredisk-3090/pvc-ck5cw
I0906 20:27:57.748710       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: claim azuredisk-3090/pvc-ck5cw found: phase: Bound, bound to: "pvc-e618297b-caca-412c-a79b-8d6f019b3664", bindCompleted: true, boundByController: true
I0906 20:27:57.748725       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: all is bound
I0906 20:27:57.748753       1 pv_controller.go:858] updating PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: set phase Bound
I0906 20:27:57.748767       1 pv_controller.go:861] updating PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase Bound already set
I0906 20:27:57.748781       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" with version 1901
I0906 20:27:57.748801       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: phase: Failed, bound to: "azuredisk-3090/pvc-lrr98 (uid: 83283382-47c1-46cb-bed7-d60fb4ee95b1)", boundByController: true
I0906 20:27:57.748843       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: volume is bound to claim azuredisk-3090/pvc-lrr98
I0906 20:27:57.748879       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: claim azuredisk-3090/pvc-lrr98 not found
I0906 20:27:57.748908       1 pv_controller.go:1108] reclaimVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: policy is Delete
I0906 20:27:57.748975       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]]
I0906 20:27:57.749080       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-3090/pvc-bqdjc" with version 1692
I0906 20:27:57.749219       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-3090/pvc-bqdjc]: phase: Bound, bound to: "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22", bindCompleted: true, boundByController: true
... skipping 40 lines ...
I0906 20:27:58.213326       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:28:02.347690       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 29 items received
I0906 20:28:02.985322       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1
I0906 20:28:02.985395       1 pv_controller.go:1436] volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" deleted
I0906 20:28:02.985424       1 pv_controller.go:1284] deleteVolumeOperation [pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: success
I0906 20:28:02.992008       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1" with version 1920
I0906 20:28:02.992339       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: phase: Failed, bound to: "azuredisk-3090/pvc-lrr98 (uid: 83283382-47c1-46cb-bed7-d60fb4ee95b1)", boundByController: true
I0906 20:28:02.992372       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: volume is bound to claim azuredisk-3090/pvc-lrr98
I0906 20:28:02.992419       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: claim azuredisk-3090/pvc-lrr98 not found
I0906 20:28:02.992431       1 pv_controller.go:1108] reclaimVolume[pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1]: policy is Delete
I0906 20:28:02.992086       1 pv_protection_controller.go:205] Got event on PV pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1
I0906 20:28:02.992482       1 pv_protection_controller.go:125] Processing PV pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1
I0906 20:28:02.992543       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83283382-47c1-46cb-bed7-d60fb4ee95b1[2d8e5326-1206-4243-af9b-5df0e3511a23]]
... skipping 211 lines ...
I0906 20:28:40.720793       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]]
I0906 20:28:40.720805       1 pv_controller.go:1764] operation "delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]" is already running, skipping
I0906 20:28:40.720830       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e618297b-caca-412c-a79b-8d6f019b3664] started
I0906 20:28:40.720380       1 pv_protection_controller.go:205] Got event on PV pvc-e618297b-caca-412c-a79b-8d6f019b3664
I0906 20:28:40.722323       1 pv_controller.go:1341] isVolumeReleased[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is released
I0906 20:28:40.722339       1 pv_controller.go:1405] doDeleteVolume [pvc-e618297b-caca-412c-a79b-8d6f019b3664]
I0906 20:28:40.743290       1 pv_controller.go:1260] deletion of volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_0), could not be deleted
I0906 20:28:40.743314       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: set phase Failed
I0906 20:28:40.743323       1 pv_controller.go:858] updating PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: set phase Failed
I0906 20:28:40.746464       1 pv_protection_controller.go:205] Got event on PV pvc-e618297b-caca-412c-a79b-8d6f019b3664
I0906 20:28:40.746434       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" with version 1988
I0906 20:28:40.746691       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase: Failed, bound to: "azuredisk-3090/pvc-ck5cw (uid: e618297b-caca-412c-a79b-8d6f019b3664)", boundByController: true
I0906 20:28:40.746793       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is bound to claim azuredisk-3090/pvc-ck5cw
I0906 20:28:40.746969       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: claim azuredisk-3090/pvc-ck5cw not found
I0906 20:28:40.747122       1 pv_controller.go:1108] reclaimVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: policy is Delete
I0906 20:28:40.747276       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]]
I0906 20:28:40.747359       1 pv_controller.go:1764] operation "delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]" is already running, skipping
I0906 20:28:40.747771       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" with version 1988
I0906 20:28:40.747794       1 pv_controller.go:879] volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" entered phase "Failed"
I0906 20:28:40.747820       1 pv_controller.go:901] volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_0), could not be deleted
E0906 20:28:40.747939       1 goroutinemap.go:150] Operation for "delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]" failed. No retries permitted until 2022-09-06 20:28:41.247841873 +0000 UTC m=+625.772481494 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_0), could not be deleted"
I0906 20:28:40.748264       1 event.go:291] "Event occurred" object="pvc-e618297b-caca-412c-a79b-8d6f019b3664" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_0), could not be deleted"
I0906 20:28:40.864853       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000000"
I0906 20:28:40.864886       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664 to the node "capz-hx45zt-mp-0000000" mounted false
I0906 20:28:40.926475       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-hx45zt-mp-0000000" succeeded. VolumesAttached: []
I0906 20:28:40.926778       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664") on node "capz-hx45zt-mp-0000000" 
I0906 20:28:40.926682       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000000"
... skipping 11 lines ...
I0906 20:28:42.750425       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is bound to claim azuredisk-3090/pvc-bqdjc
I0906 20:28:42.750444       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: claim azuredisk-3090/pvc-bqdjc found: phase: Bound, bound to: "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22", bindCompleted: true, boundByController: true
I0906 20:28:42.750483       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: all is bound
I0906 20:28:42.750511       1 pv_controller.go:858] updating PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: set phase Bound
I0906 20:28:42.750529       1 pv_controller.go:861] updating PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: phase Bound already set
I0906 20:28:42.750571       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" with version 1988
I0906 20:28:42.750595       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase: Failed, bound to: "azuredisk-3090/pvc-ck5cw (uid: e618297b-caca-412c-a79b-8d6f019b3664)", boundByController: true
I0906 20:28:42.750648       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is bound to claim azuredisk-3090/pvc-ck5cw
I0906 20:28:42.750686       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: claim azuredisk-3090/pvc-ck5cw not found
I0906 20:28:42.750724       1 pv_controller.go:1108] reclaimVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: policy is Delete
I0906 20:28:42.750755       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]]
I0906 20:28:42.750820       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e618297b-caca-412c-a79b-8d6f019b3664] started
I0906 20:28:42.750429       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-3090/pvc-bqdjc]: phase: Bound, bound to: "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22", bindCompleted: true, boundByController: true
... skipping 10 lines ...
I0906 20:28:42.752048       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-3090/pvc-bqdjc] status: phase Bound already set
I0906 20:28:42.752196       1 pv_controller.go:1038] volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" bound to claim "azuredisk-3090/pvc-bqdjc"
I0906 20:28:42.752313       1 pv_controller.go:1039] volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" status after binding: phase: Bound, bound to: "azuredisk-3090/pvc-bqdjc (uid: 5430bcfd-07b3-4eba-a840-0f29e3cc3f22)", boundByController: true
I0906 20:28:42.752411       1 pv_controller.go:1040] claim "azuredisk-3090/pvc-bqdjc" status after binding: phase: Bound, bound to: "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22", bindCompleted: true, boundByController: true
I0906 20:28:42.754785       1 pv_controller.go:1341] isVolumeReleased[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is released
I0906 20:28:42.754800       1 pv_controller.go:1405] doDeleteVolume [pvc-e618297b-caca-412c-a79b-8d6f019b3664]
I0906 20:28:42.754851       1 pv_controller.go:1260] deletion of volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) since it's in attaching or detaching state
I0906 20:28:42.754867       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: set phase Failed
I0906 20:28:42.754878       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase Failed already set
E0906 20:28:42.754929       1 goroutinemap.go:150] Operation for "delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]" failed. No retries permitted until 2022-09-06 20:28:43.754886394 +0000 UTC m=+628.279526015 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) since it's in attaching or detaching state"
I0906 20:28:44.410302       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.499µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46790" resp=200
I0906 20:28:47.635566       1 gc_controller.go:161] GC'ing orphaned
I0906 20:28:47.635592       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:28:54.410111       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="55.099µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36882" resp=200
I0906 20:28:56.241062       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) returned with <nil>
I0906 20:28:56.241110       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664) succeeded
... skipping 23 lines ...
I0906 20:28:57.751869       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is bound to claim azuredisk-3090/pvc-bqdjc
I0906 20:28:57.751890       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: claim azuredisk-3090/pvc-bqdjc found: phase: Bound, bound to: "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22", bindCompleted: true, boundByController: true
I0906 20:28:57.751908       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: all is bound
I0906 20:28:57.751919       1 pv_controller.go:858] updating PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: set phase Bound
I0906 20:28:57.751929       1 pv_controller.go:861] updating PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: phase Bound already set
I0906 20:28:57.751941       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" with version 1988
I0906 20:28:57.751962       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase: Failed, bound to: "azuredisk-3090/pvc-ck5cw (uid: e618297b-caca-412c-a79b-8d6f019b3664)", boundByController: true
I0906 20:28:57.752002       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is bound to claim azuredisk-3090/pvc-ck5cw
I0906 20:28:57.752022       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: claim azuredisk-3090/pvc-ck5cw not found
I0906 20:28:57.752028       1 pv_controller.go:1108] reclaimVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: policy is Delete
I0906 20:28:57.752041       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]]
I0906 20:28:57.752089       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e618297b-caca-412c-a79b-8d6f019b3664] started
I0906 20:28:57.764233       1 pv_controller.go:1341] isVolumeReleased[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is released
I0906 20:28:57.764251       1 pv_controller.go:1405] doDeleteVolume [pvc-e618297b-caca-412c-a79b-8d6f019b3664]
I0906 20:28:58.246972       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:29:02.968417       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-e618297b-caca-412c-a79b-8d6f019b3664
I0906 20:29:02.968448       1 pv_controller.go:1436] volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" deleted
I0906 20:29:02.968461       1 pv_controller.go:1284] deleteVolumeOperation [pvc-e618297b-caca-412c-a79b-8d6f019b3664]: success
I0906 20:29:02.973316       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e618297b-caca-412c-a79b-8d6f019b3664" with version 2024
I0906 20:29:02.973795       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: phase: Failed, bound to: "azuredisk-3090/pvc-ck5cw (uid: e618297b-caca-412c-a79b-8d6f019b3664)", boundByController: true
I0906 20:29:02.973978       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: volume is bound to claim azuredisk-3090/pvc-ck5cw
I0906 20:29:02.973928       1 pv_protection_controller.go:205] Got event on PV pvc-e618297b-caca-412c-a79b-8d6f019b3664
I0906 20:29:02.974226       1 pv_protection_controller.go:125] Processing PV pvc-e618297b-caca-412c-a79b-8d6f019b3664
I0906 20:29:02.974118       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: claim azuredisk-3090/pvc-ck5cw not found
I0906 20:29:02.974377       1 pv_controller.go:1108] reclaimVolume[pvc-e618297b-caca-412c-a79b-8d6f019b3664]: policy is Delete
I0906 20:29:02.974415       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e618297b-caca-412c-a79b-8d6f019b3664[a301a486-1e5e-4843-bd2a-5b3e595235c6]]
... skipping 187 lines ...
I0906 20:29:46.485116       1 pv_controller.go:1108] reclaimVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: policy is Delete
I0906 20:29:46.485256       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]]
I0906 20:29:46.485390       1 pv_controller.go:1764] operation "delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]" is already running, skipping
I0906 20:29:46.484734       1 pv_protection_controller.go:205] Got event on PV pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22
I0906 20:29:46.486592       1 pv_controller.go:1341] isVolumeReleased[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is released
I0906 20:29:46.486606       1 pv_controller.go:1405] doDeleteVolume [pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]
I0906 20:29:46.486636       1 pv_controller.go:1260] deletion of volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) since it's in attaching or detaching state
I0906 20:29:46.486723       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: set phase Failed
I0906 20:29:46.486732       1 pv_controller.go:858] updating PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: set phase Failed
I0906 20:29:46.489268       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" with version 2104
I0906 20:29:46.489433       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: phase: Failed, bound to: "azuredisk-3090/pvc-bqdjc (uid: 5430bcfd-07b3-4eba-a840-0f29e3cc3f22)", boundByController: true
I0906 20:29:46.489570       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is bound to claim azuredisk-3090/pvc-bqdjc
I0906 20:29:46.489686       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: claim azuredisk-3090/pvc-bqdjc not found
I0906 20:29:46.490056       1 pv_controller.go:1108] reclaimVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: policy is Delete
I0906 20:29:46.490204       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]]
I0906 20:29:46.490338       1 pv_controller.go:1764] operation "delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]" is already running, skipping
I0906 20:29:46.489360       1 pv_protection_controller.go:205] Got event on PV pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22
I0906 20:29:46.489908       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" with version 2104
I0906 20:29:46.490656       1 pv_controller.go:879] volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" entered phase "Failed"
I0906 20:29:46.490821       1 pv_controller.go:901] volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) since it's in attaching or detaching state
E0906 20:29:46.491007       1 goroutinemap.go:150] Operation for "delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]" failed. No retries permitted until 2022-09-06 20:29:46.990955145 +0000 UTC m=+691.515594866 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) since it's in attaching or detaching state"
I0906 20:29:46.491150       1 event.go:291] "Event occurred" object="pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) since it's in attaching or detaching state"
I0906 20:29:47.637953       1 gc_controller.go:161] GC'ing orphaned
I0906 20:29:47.637986       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:29:52.361839       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) returned with <nil>
I0906 20:29:52.361880       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22) succeeded
I0906 20:29:52.361890       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22 was detached from node:capz-hx45zt-mp-0000001
I0906 20:29:52.361913       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22") on node "capz-hx45zt-mp-0000001" 
I0906 20:29:54.412097       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.899µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:49370" resp=200
I0906 20:29:57.610158       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:29:57.663774       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:29:57.753939       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:29:57.754087       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" with version 2104
I0906 20:29:57.754161       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: phase: Failed, bound to: "azuredisk-3090/pvc-bqdjc (uid: 5430bcfd-07b3-4eba-a840-0f29e3cc3f22)", boundByController: true
I0906 20:29:57.754204       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is bound to claim azuredisk-3090/pvc-bqdjc
I0906 20:29:57.754227       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: claim azuredisk-3090/pvc-bqdjc not found
I0906 20:29:57.754237       1 pv_controller.go:1108] reclaimVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: policy is Delete
I0906 20:29:57.754256       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]]
I0906 20:29:57.754290       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22] started
I0906 20:29:57.765404       1 pv_controller.go:1341] isVolumeReleased[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is released
I0906 20:29:57.765424       1 pv_controller.go:1405] doDeleteVolume [pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]
I0906 20:29:58.281795       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:30:02.984668       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22
I0906 20:30:02.984697       1 pv_controller.go:1436] volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" deleted
I0906 20:30:02.984709       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: success
I0906 20:30:02.989892       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22" with version 2129
I0906 20:30:02.989993       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: phase: Failed, bound to: "azuredisk-3090/pvc-bqdjc (uid: 5430bcfd-07b3-4eba-a840-0f29e3cc3f22)", boundByController: true
I0906 20:30:02.990066       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: volume is bound to claim azuredisk-3090/pvc-bqdjc
I0906 20:30:02.990121       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: claim azuredisk-3090/pvc-bqdjc not found
I0906 20:30:02.990154       1 pv_controller.go:1108] reclaimVolume[pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22]: policy is Delete
I0906 20:30:02.990198       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22[fbd62e15-e2a7-4498-b14f-0e0ef60062f0]]
I0906 20:30:02.990265       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22] started
I0906 20:30:02.989907       1 pv_protection_controller.go:205] Got event on PV pvc-5430bcfd-07b3-4eba-a840-0f29e3cc3f22
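The block above is the PV controller's Delete-reclaim path playing out: the first delete attempt is rejected while the Azure disk is still attaching/detaching, the attach-detach controller then detaches it from capz-hx45zt-mp-0000001, and a later retry deletes the managed disk and marks the operation a success. A minimal Go sketch of that ordering constraint, with diskIsAttachingOrDetaching and deleteManagedDisk as hypothetical stand-ins for the cloud-provider calls (not the controller's actual code):

package main

import "fmt"

// Hypothetical stand-ins for the cloud-provider calls behind the log lines
// above; the real controller consults the VM's data disks before deleting.
func diskIsAttachingOrDetaching(diskURI string) bool { return false }
func deleteManagedDisk(diskURI string) error         { return nil }

// tryDeleteVolume mirrors the ordering constraint seen in the log: a managed
// disk may only be deleted once it is no longer attaching to or detaching from
// a VM; otherwise the delete fails and the PV controller requeues it.
func tryDeleteVolume(diskURI string) error {
	if diskIsAttachingOrDetaching(diskURI) {
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", diskURI)
	}
	return deleteManagedDisk(diskURI)
}

func main() {
	if err := tryDeleteVolume("/subscriptions/.../disks/example-dynamic-pvc"); err != nil {
		fmt.Println("requeue:", err)
		return
	}
	fmt.Println("disk deleted; the PV object can now be removed")
}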
... skipping 45 lines ...
I0906 20:30:07.619486       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c-x7rpz"
I0906 20:30:07.625540       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c"
I0906 20:30:07.625902       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c" (14.272653ms)
I0906 20:30:07.626072       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c", timestamp:time.Time{wall:0xc0be0953e4758724, ext:712136321805, loc:(*time.Location)(0x731ea80)}}
I0906 20:30:07.626400       1 replica_set_utils.go:59] Updating status for : azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0906 20:30:07.629737       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-5vl5h" duration="22.033774ms"
I0906 20:30:07.629953       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-5vl5h" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-5vl5h\": the object has been modified; please apply your changes to the latest version and try again"
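The "Error syncing deployment ... the object has been modified" line is an ordinary optimistic-concurrency conflict: the deployment controller's write raced with another update, the apiserver rejected the stale resourceVersion, and the controller immediately re-synced. A generic read-modify-write retry sketch, assuming a hypothetical update closure and conflict error rather than a real client-go call:

package main

import (
	"errors"
	"fmt"
)

// errConflict is a hypothetical stand-in for an apiserver 409 Conflict
// ("the object has been modified; please apply your changes to the latest
// version and try again").
var errConflict = errors.New("conflict: object has been modified")

// updateWithRetry re-reads the object and reapplies the change whenever the
// write is rejected with a conflict, up to a small number of attempts.
func updateWithRetry(attempts int, update func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = update(); !errors.Is(err, errConflict) {
			return err // success, or a non-conflict error worth surfacing
		}
	}
	return err
}

func main() {
	calls := 0
	err := updateWithRetry(3, func() error {
		calls++
		if calls == 1 {
			return errConflict // first write loses the race
		}
		return nil // retried against the latest resourceVersion
	})
	fmt.Println("result:", err)
}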
I0906 20:30:07.630156       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-6159/azuredisk-volume-tester-5vl5h" startTime="2022-09-06 20:30:07.630099895 +0000 UTC m=+712.154739516"
I0906 20:30:07.630921       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-5vl5h" timed out (false) [last progress check: 2022-09-06 20:30:07 +0000 UTC - now: 2022-09-06 20:30:07.630913487 +0000 UTC m=+712.155553108]
I0906 20:30:07.632880       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c"
I0906 20:30:07.632426       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-6159/pvc-bfhx6" with version 2153
I0906 20:30:07.633475       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-6159/pvc-bfhx6]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0906 20:30:07.633600       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-6159/pvc-bfhx6]: no volume found
... skipping 251 lines ...
I0906 20:30:24.993227       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm, PodDisruptionBudget controller will avoid syncing.
I0906 20:30:24.993250       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm"
I0906 20:30:24.993486       1 replica_set.go:439] Pod azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm updated, objectMeta {Name:azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm GenerateName:azuredisk-volume-tester-5vl5h-ffbcc576c- Namespace:azuredisk-6159 SelfLink: UID:580f83cb-d995-4279-b18f-51f9a8236770 ResourceVersion:2241 Generation:0 CreationTimestamp:2022-09-06 20:30:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-685213522303989579 pod-template-hash:ffbcc576c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-5vl5h-ffbcc576c UID:04b84170-7a94-4be3-a1ad-b990491ad417 Controller:0xc001dc6ef7 BlockOwnerDeletion:0xc001dc6ef8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:30:24 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04b84170-7a94-4be3-a1ad-b990491ad417\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm GenerateName:azuredisk-volume-tester-5vl5h-ffbcc576c- Namespace:azuredisk-6159 SelfLink: UID:580f83cb-d995-4279-b18f-51f9a8236770 ResourceVersion:2244 Generation:0 CreationTimestamp:2022-09-06 20:30:24 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-685213522303989579 pod-template-hash:ffbcc576c] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-5vl5h-ffbcc576c UID:04b84170-7a94-4be3-a1ad-b990491ad417 Controller:0xc001ed87f7 BlockOwnerDeletion:0xc001ed87f8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-06 20:30:24 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"04b84170-7a94-4be3-a1ad-b990491ad417\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-06 20:30:24 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]}.
I0906 20:30:24.994150       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c", timestamp:time.Time{wall:0xc0be095837beccd8, ext:729459890881, loc:(*time.Location)(0x731ea80)}}
I0906 20:30:24.994324       1 controller_utils.go:972] Ignoring inactive pod azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c-x7rpz in state Running, deletion time 2022-09-06 20:30:54 +0000 UTC
I0906 20:30:24.994562       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c" (424.097µs)
W0906 20:30:25.055067       1 reconciler.go:385] Multi-Attach error for volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b") from node "capz-hx45zt-mp-0000001" Volume is already used by pods azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c-x7rpz on node capz-hx45zt-mp-0000000
I0906 20:30:25.055142       1 event.go:291] "Event occurred" object="azuredisk-6159/azuredisk-volume-tester-5vl5h-ffbcc576c-kz5xm" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b\" Volume is already used by pod(s) azuredisk-volume-tester-5vl5h-ffbcc576c-x7rpz"
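The Multi-Attach warning above is expected for a ReadWriteOnce Azure disk: the deployment's replacement pod landed on capz-hx45zt-mp-0000001 while the old pod's node (capz-hx45zt-mp-0000000) still held the volume, so the attach-detach controller must wait for the detach before it can attach to the new node. A rough sketch of that single-node constraint, with a hypothetical in-memory attachment map standing in for the controller's actual state of the world:

package main

import "fmt"

// attachedTo is a hypothetical stand-in for the attach-detach controller's
// "actual state of the world": which node currently holds each RWO volume.
var attachedTo = map[string]string{
	"pvc-c320c1f5": "capz-hx45zt-mp-0000000", // still held by the old pod's node
}

// canAttach reports whether an RWO volume may be attached to node; if it is
// already attached elsewhere the controller emits a Multi-Attach warning and
// waits for the detach instead of forcing the attach.
func canAttach(volume, node string) (bool, string) {
	holder, ok := attachedTo[volume]
	if ok && holder != node {
		return false, holder
	}
	return true, ""
}

func main() {
	if ok, holder := canAttach("pvc-c320c1f5", "capz-hx45zt-mp-0000001"); !ok {
		fmt.Printf("Multi-Attach error: volume is already used on node %s\n", holder)
		return
	}
	fmt.Println("volume attached")
}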
I0906 20:30:27.011053       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
I0906 20:30:27.611183       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:30:27.639678       1 gc_controller.go:161] GC'ing orphaned
I0906 20:30:27.639704       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:30:27.665075       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:30:27.732073       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
... skipping 400 lines ...
I0906 20:32:06.711819       1 pv_controller.go:1108] reclaimVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: policy is Delete
I0906 20:32:06.711833       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]]
I0906 20:32:06.711852       1 pv_controller.go:1764] operation "delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]" is already running, skipping
I0906 20:32:06.711868       1 pv_protection_controller.go:205] Got event on PV pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b
I0906 20:32:06.716622       1 pv_controller.go:1341] isVolumeReleased[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is released
I0906 20:32:06.716810       1 pv_controller.go:1405] doDeleteVolume [pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]
I0906 20:32:06.784587       1 pv_controller.go:1260] deletion of volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:32:06.784611       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: set phase Failed
I0906 20:32:06.784622       1 pv_controller.go:858] updating PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: set phase Failed
I0906 20:32:06.787900       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" with version 2424
I0906 20:32:06.788438       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: phase: Failed, bound to: "azuredisk-6159/pvc-bfhx6 (uid: c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b)", boundByController: true
I0906 20:32:06.788639       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is bound to claim azuredisk-6159/pvc-bfhx6
I0906 20:32:06.788691       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: claim azuredisk-6159/pvc-bfhx6 not found
I0906 20:32:06.788475       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" with version 2424
I0906 20:32:06.788777       1 pv_controller.go:879] volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" entered phase "Failed"
I0906 20:32:06.788791       1 pv_controller.go:901] volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:32:06.788845       1 goroutinemap.go:150] Operation for "delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]" failed. No retries permitted until 2022-09-06 20:32:07.288813207 +0000 UTC m=+831.813452828 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:32:06.788743       1 pv_controller.go:1108] reclaimVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: policy is Delete
I0906 20:32:06.789102       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]]
I0906 20:32:06.789303       1 pv_controller.go:1766] operation "delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]" postponed due to exponential backoff
I0906 20:32:06.788494       1 pv_protection_controller.go:205] Got event on PV pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b
I0906 20:32:06.789147       1 event.go:291] "Event occurred" object="pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:32:07.080319       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
... skipping 12 lines ...
I0906 20:32:07.744188       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-mp-0000001 ReadyCondition updated. Updating timestamp.
I0906 20:32:09.617932       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0906 20:32:12.347763       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:32:12.669233       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:32:12.763635       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:32:12.763690       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" with version 2424
I0906 20:32:12.763726       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: phase: Failed, bound to: "azuredisk-6159/pvc-bfhx6 (uid: c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b)", boundByController: true
I0906 20:32:12.763758       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is bound to claim azuredisk-6159/pvc-bfhx6
I0906 20:32:12.763780       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: claim azuredisk-6159/pvc-bfhx6 not found
I0906 20:32:12.763793       1 pv_controller.go:1108] reclaimVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: policy is Delete
I0906 20:32:12.763807       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]]
I0906 20:32:12.763834       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b] started
I0906 20:32:12.766236       1 pv_controller.go:1341] isVolumeReleased[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is released
I0906 20:32:12.766254       1 pv_controller.go:1405] doDeleteVolume [pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]
I0906 20:32:12.766287       1 pv_controller.go:1260] deletion of volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) since it's in attaching or detaching state
I0906 20:32:12.766300       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: set phase Failed
I0906 20:32:12.766312       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: phase Failed already set
E0906 20:32:12.766350       1 goroutinemap.go:150] Operation for "delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]" failed. No retries permitted until 2022-09-06 20:32:13.766326193 +0000 UTC m=+838.290965914 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) since it's in attaching or detaching state"
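The goroutinemap errors also reveal the retry schedule: the first failed delete is blocked for 500ms, the next for 1s, i.e. the per-operation backoff doubles after each consecutive failure. A minimal sketch of that schedule (the cap is an assumption for illustration, not taken from this log):

package main

import (
	"fmt"
	"time"
)

// backoffSchedule doubles the wait after every consecutive failure, starting at
// 500ms as in the log (500ms, then 1s, ...); the 5m ceiling is illustrative.
func backoffSchedule(failures int) time.Duration {
	const initial = 500 * time.Millisecond
	const maxDelay = 5 * time.Minute
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for f := 1; f <= 5; f++ {
		fmt.Printf("after failure %d: retry no sooner than %s\n", f, backoffSchedule(f))
	}
}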
I0906 20:32:12.910643       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 0 items received
I0906 20:32:14.410963       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="92.599µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54562" resp=200
I0906 20:32:22.543783       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) returned with <nil>
I0906 20:32:22.543827       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b) succeeded
I0906 20:32:22.543836       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b was detached from node:capz-hx45zt-mp-0000001
I0906 20:32:22.544048       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b") on node "capz-hx45zt-mp-0000001" 
I0906 20:32:24.409772       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63.599µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37810" resp=200
I0906 20:32:27.612989       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:32:27.644473       1 gc_controller.go:161] GC'ing orphaned
I0906 20:32:27.644500       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:32:27.669943       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:32:27.764399       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:32:27.764457       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" with version 2424
I0906 20:32:27.764493       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: phase: Failed, bound to: "azuredisk-6159/pvc-bfhx6 (uid: c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b)", boundByController: true
I0906 20:32:27.764570       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is bound to claim azuredisk-6159/pvc-bfhx6
I0906 20:32:27.764593       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: claim azuredisk-6159/pvc-bfhx6 not found
I0906 20:32:27.764605       1 pv_controller.go:1108] reclaimVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: policy is Delete
I0906 20:32:27.764637       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]]
I0906 20:32:27.764672       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b] started
I0906 20:32:27.771750       1 pv_controller.go:1341] isVolumeReleased[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is released
... skipping 2 lines ...
I0906 20:32:28.759408       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0906 20:32:30.637403       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0906 20:32:32.991893       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b
I0906 20:32:32.991918       1 pv_controller.go:1436] volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" deleted
I0906 20:32:32.991931       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: success
I0906 20:32:32.997945       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b" with version 2465
I0906 20:32:32.997978       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: phase: Failed, bound to: "azuredisk-6159/pvc-bfhx6 (uid: c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b)", boundByController: true
I0906 20:32:32.998032       1 pv_protection_controller.go:205] Got event on PV pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b
I0906 20:32:32.998067       1 pv_protection_controller.go:125] Processing PV pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b
I0906 20:32:32.998284       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: volume is bound to claim azuredisk-6159/pvc-bfhx6
I0906 20:32:32.998312       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: claim azuredisk-6159/pvc-bfhx6 not found
I0906 20:32:32.998415       1 pv_controller.go:1108] reclaimVolume[pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b]: policy is Delete
I0906 20:32:32.998552       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c320c1f5-bf25-4c6c-bccb-bbc6ca167c0b[917d19e1-f274-4993-b410-803b64385b64]]
... skipping 513 lines ...
I0906 20:32:58.682456       1 pv_controller.go:1038] volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" bound to claim "azuredisk-9241/pvc-xhwjr"
I0906 20:32:58.682492       1 pv_controller.go:1039] volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" status after binding: phase: Bound, bound to: "azuredisk-9241/pvc-xhwjr (uid: 5f0ed6ef-718f-4d45-b7cf-7afb103422b2)", boundByController: true
I0906 20:32:58.682509       1 pv_controller.go:1040] claim "azuredisk-9241/pvc-xhwjr" status after binding: phase: Bound, bound to: "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2", bindCompleted: true, boundByController: true
I0906 20:32:59.059614       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0906 20:32:59.352733       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7463
I0906 20:32:59.398222       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7463, name default-token-7wjrk, uid 89b90323-633d-45f3-8049-dc96df334d59, event type delete
E0906 20:32:59.418311       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7463/default: secrets "default-token-g24fm" is forbidden: unable to create new content in namespace azuredisk-7463 because it is being terminated
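The tokens_controller error above is benign: the namespace controller is already tearing down azuredisk-7463, so the apiserver refuses to create a replacement default token in the terminating namespace, and the retry is dropped once the service account itself is deleted. A tiny, hypothetical admission-style guard illustrating that rule:

package main

import "fmt"

// admitCreate is a hypothetical sketch of the rule behind the error: new
// objects may not be created in a namespace that is being terminated.
func admitCreate(namespace string, terminating bool) error {
	if terminating {
		return fmt.Errorf("unable to create new content in namespace %s because it is being terminated", namespace)
	}
	return nil
}

func main() {
	fmt.Println(admitCreate("azuredisk-7463", true))
}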
I0906 20:32:59.461672       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7463, name kube-root-ca.crt, uid 8a22ba09-5a7c-4cc3-8a62-2fa0abff17e1, event type delete
I0906 20:32:59.462986       1 publisher.go:181] Finished syncing namespace "azuredisk-7463" (1.273387ms)
I0906 20:32:59.481448       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7463/default), service account deleted, removing tokens
I0906 20:32:59.481602       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7463, name default, uid d1cb9d6d-4c9a-445c-a92b-315ad2a3de88, event type delete
I0906 20:32:59.481714       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7463" (2.3µs)
I0906 20:32:59.493928       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7463" (1.7µs)
... skipping 323 lines ...
I0906 20:33:35.360613       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]]
I0906 20:33:35.360622       1 pv_controller.go:1764] operation "delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]" is already running, skipping
I0906 20:33:35.360661       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2] started
I0906 20:33:35.359940       1 pv_protection_controller.go:205] Got event on PV pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2
I0906 20:33:35.362245       1 pv_controller.go:1341] isVolumeReleased[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is released
I0906 20:33:35.362378       1 pv_controller.go:1405] doDeleteVolume [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]
I0906 20:33:35.398218       1 pv_controller.go:1260] deletion of volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:33:35.398398       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: set phase Failed
I0906 20:33:35.398481       1 pv_controller.go:858] updating PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: set phase Failed
I0906 20:33:35.402677       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" with version 2702
I0906 20:33:35.403191       1 pv_controller.go:879] volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" entered phase "Failed"
I0906 20:33:35.403323       1 pv_controller.go:901] volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:33:35.403453       1 goroutinemap.go:150] Operation for "delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]" failed. No retries permitted until 2022-09-06 20:33:35.903419491 +0000 UTC m=+920.428059212 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:33:35.403766       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" with version 2702
I0906 20:33:35.403873       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: phase: Failed, bound to: "azuredisk-9241/pvc-xhwjr (uid: 5f0ed6ef-718f-4d45-b7cf-7afb103422b2)", boundByController: true
I0906 20:33:35.403976       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is bound to claim azuredisk-9241/pvc-xhwjr
I0906 20:33:35.404077       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: claim azuredisk-9241/pvc-xhwjr not found
I0906 20:33:35.404169       1 pv_controller.go:1108] reclaimVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: policy is Delete
I0906 20:33:35.404261       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]]
I0906 20:33:35.404343       1 pv_controller.go:1766] operation "delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]" postponed due to exponential backoff
I0906 20:33:35.404421       1 pv_protection_controller.go:205] Got event on PV pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2
... skipping 45 lines ...
I0906 20:33:42.766994       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is bound to claim azuredisk-9241/pvc-g9bdz
I0906 20:33:42.767012       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: claim azuredisk-9241/pvc-g9bdz found: phase: Bound, bound to: "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3", bindCompleted: true, boundByController: true
I0906 20:33:42.767044       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: all is bound
I0906 20:33:42.767057       1 pv_controller.go:858] updating PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: set phase Bound
I0906 20:33:42.767078       1 pv_controller.go:861] updating PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase Bound already set
I0906 20:33:42.767095       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" with version 2702
I0906 20:33:42.767113       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: phase: Failed, bound to: "azuredisk-9241/pvc-xhwjr (uid: 5f0ed6ef-718f-4d45-b7cf-7afb103422b2)", boundByController: true
I0906 20:33:42.767134       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is bound to claim azuredisk-9241/pvc-xhwjr
I0906 20:33:42.767171       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: claim azuredisk-9241/pvc-xhwjr not found
I0906 20:33:42.767181       1 pv_controller.go:1108] reclaimVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: policy is Delete
I0906 20:33:42.767202       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]]
I0906 20:33:42.767245       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2] started
I0906 20:33:42.767507       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9241/pvc-x9gn8" with version 2601
... skipping 27 lines ...
I0906 20:33:42.769719       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9241/pvc-g9bdz] status: phase Bound already set
I0906 20:33:42.769818       1 pv_controller.go:1038] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" bound to claim "azuredisk-9241/pvc-g9bdz"
I0906 20:33:42.769913       1 pv_controller.go:1039] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" status after binding: phase: Bound, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:33:42.770029       1 pv_controller.go:1040] claim "azuredisk-9241/pvc-g9bdz" status after binding: phase: Bound, bound to: "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3", bindCompleted: true, boundByController: true
I0906 20:33:42.770924       1 pv_controller.go:1341] isVolumeReleased[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is released
I0906 20:33:42.770939       1 pv_controller.go:1405] doDeleteVolume [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]
I0906 20:33:42.771016       1 pv_controller.go:1260] deletion of volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2) since it's in attaching or detaching state
I0906 20:33:42.771060       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: set phase Failed
I0906 20:33:42.771077       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: phase Failed already set
E0906 20:33:42.771150       1 goroutinemap.go:150] Operation for "delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]" failed. No retries permitted until 2022-09-06 20:33:43.771099323 +0000 UTC m=+928.295739044 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2) since it's in attaching or detaching state"
I0906 20:33:44.409837       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="69.899µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:42146" resp=200
I0906 20:33:46.000404       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-r9b4kg" (18.5µs)
I0906 20:33:46.000441       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-zj5cwf" (4.9µs)
I0906 20:33:47.646458       1 gc_controller.go:161] GC'ing orphaned
I0906 20:33:47.646512       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:33:51.383634       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 54 lines ...
I0906 20:33:57.768642       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-9241/pvc-g9bdz]: already bound to "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3"
I0906 20:33:57.768653       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9241/pvc-g9bdz] status: set phase Bound
I0906 20:33:57.768669       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9241/pvc-g9bdz] status: phase Bound already set
I0906 20:33:57.768681       1 pv_controller.go:1038] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" bound to claim "azuredisk-9241/pvc-g9bdz"
I0906 20:33:57.768697       1 pv_controller.go:1039] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" status after binding: phase: Bound, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:33:57.768713       1 pv_controller.go:1040] claim "azuredisk-9241/pvc-g9bdz" status after binding: phase: Bound, bound to: "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3", bindCompleted: true, boundByController: true
I0906 20:33:57.768833       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: phase: Failed, bound to: "azuredisk-9241/pvc-xhwjr (uid: 5f0ed6ef-718f-4d45-b7cf-7afb103422b2)", boundByController: true
I0906 20:33:57.768873       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is bound to claim azuredisk-9241/pvc-xhwjr
I0906 20:33:57.768892       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: claim azuredisk-9241/pvc-xhwjr not found
I0906 20:33:57.768914       1 pv_controller.go:1108] reclaimVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: policy is Delete
I0906 20:33:57.768929       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]]
I0906 20:33:57.768961       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2] started
I0906 20:33:57.774426       1 pv_controller.go:1341] isVolumeReleased[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is released
... skipping 5 lines ...
I0906 20:34:01.016044       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-zj5cwf" (15.391043ms)
I0906 20:34:01.016113       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-zj5cwf, uid 331afd8a-eeac-4dab-82fb-276ddab7d298, event type delete
I0906 20:34:02.984259       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2
I0906 20:34:02.984558       1 pv_controller.go:1436] volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" deleted
I0906 20:34:02.984710       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: success
I0906 20:34:02.989887       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2" with version 2749
I0906 20:34:02.989928       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: phase: Failed, bound to: "azuredisk-9241/pvc-xhwjr (uid: 5f0ed6ef-718f-4d45-b7cf-7afb103422b2)", boundByController: true
I0906 20:34:02.989982       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: volume is bound to claim azuredisk-9241/pvc-xhwjr
I0906 20:34:02.990046       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: claim azuredisk-9241/pvc-xhwjr not found
I0906 20:34:02.990062       1 pv_controller.go:1108] reclaimVolume[pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2]: policy is Delete
I0906 20:34:02.990091       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]]
I0906 20:34:02.990100       1 pv_controller.go:1764] operation "delete-pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2[5c318e20-6e82-40fd-9dc2-dfc83bac1195]" is already running, skipping
I0906 20:34:02.990160       1 pv_protection_controller.go:205] Got event on PV pvc-5f0ed6ef-718f-4d45-b7cf-7afb103422b2
... skipping 48 lines ...
I0906 20:34:06.164033       1 pv_controller.go:1108] reclaimVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: policy is Delete
I0906 20:34:06.164043       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]]
I0906 20:34:06.164050       1 pv_controller.go:1764] operation "delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]" is already running, skipping
I0906 20:34:06.163986       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3] started
I0906 20:34:06.166268       1 pv_controller.go:1341] isVolumeReleased[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is released
I0906 20:34:06.166291       1 pv_controller.go:1405] doDeleteVolume [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]
I0906 20:34:06.199897       1 pv_controller.go:1260] deletion of volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:34:06.199919       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: set phase Failed
I0906 20:34:06.199929       1 pv_controller.go:858] updating PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: set phase Failed
I0906 20:34:06.203143       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" with version 2760
I0906 20:34:06.203631       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase: Failed, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:34:06.203679       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is bound to claim azuredisk-9241/pvc-g9bdz
I0906 20:34:06.203699       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: claim azuredisk-9241/pvc-g9bdz not found
I0906 20:34:06.203744       1 pv_controller.go:1108] reclaimVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: policy is Delete
I0906 20:34:06.203790       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]]
I0906 20:34:06.203840       1 pv_controller.go:1764] operation "delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]" is already running, skipping
I0906 20:34:06.203449       1 pv_protection_controller.go:205] Got event on PV pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3
I0906 20:34:06.204555       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" with version 2760
I0906 20:34:06.204579       1 pv_controller.go:879] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" entered phase "Failed"
I0906 20:34:06.204588       1 pv_controller.go:901] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:34:06.204635       1 goroutinemap.go:150] Operation for "delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]" failed. No retries permitted until 2022-09-06 20:34:06.704605581 +0000 UTC m=+951.229245302 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:34:06.204861       1 event.go:291] "Event occurred" object="pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:34:07.608038       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 14 items received
I0906 20:34:07.647224       1 gc_controller.go:161] GC'ing orphaned
I0906 20:34:07.647249       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:34:07.657408       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-6f4d74be-c1f6-4ae1-837d-34355934e641) returned with <nil>
I0906 20:34:07.657615       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-hx45zt-mp-0000001, refreshing the cache
... skipping 17 lines ...
I0906 20:34:12.768356       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: volume is bound to claim azuredisk-9241/pvc-x9gn8
I0906 20:34:12.768396       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: claim azuredisk-9241/pvc-x9gn8 found: phase: Bound, bound to: "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641", bindCompleted: true, boundByController: true
I0906 20:34:12.768469       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: all is bound
I0906 20:34:12.768499       1 pv_controller.go:858] updating PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: set phase Bound
I0906 20:34:12.768550       1 pv_controller.go:861] updating PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: phase Bound already set
I0906 20:34:12.768617       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" with version 2760
I0906 20:34:12.768676       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase: Failed, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:34:12.768740       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is bound to claim azuredisk-9241/pvc-g9bdz
I0906 20:34:12.768849       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: claim azuredisk-9241/pvc-g9bdz not found
I0906 20:34:12.768905       1 pv_controller.go:1108] reclaimVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: policy is Delete
I0906 20:34:12.768926       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]]
I0906 20:34:12.768957       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3] started
I0906 20:34:12.768286       1 pv_controller.go:861] updating PersistentVolume[pvc-6f4d74be-c1f6-4ae1-837d-34355934e641]: phase Bound already set
... skipping 3 lines ...
I0906 20:34:12.769314       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9241/pvc-x9gn8] status: phase Bound already set
I0906 20:34:12.769329       1 pv_controller.go:1038] volume "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641" bound to claim "azuredisk-9241/pvc-x9gn8"
I0906 20:34:12.769353       1 pv_controller.go:1039] volume "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641" status after binding: phase: Bound, bound to: "azuredisk-9241/pvc-x9gn8 (uid: 6f4d74be-c1f6-4ae1-837d-34355934e641)", boundByController: true
I0906 20:34:12.769390       1 pv_controller.go:1040] claim "azuredisk-9241/pvc-x9gn8" status after binding: phase: Bound, bound to: "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641", bindCompleted: true, boundByController: true
I0906 20:34:12.774579       1 pv_controller.go:1341] isVolumeReleased[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is released
I0906 20:34:12.774598       1 pv_controller.go:1405] doDeleteVolume [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]
I0906 20:34:12.774632       1 pv_controller.go:1260] deletion of volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) since it's in attaching or detaching state
I0906 20:34:12.774648       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: set phase Failed
I0906 20:34:12.774657       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase Failed already set
E0906 20:34:12.774701       1 goroutinemap.go:150] Operation for "delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]" failed. No retries permitted until 2022-09-06 20:34:13.774674008 +0000 UTC m=+958.299313729 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) since it's in attaching or detaching state"
I0906 20:34:14.410201       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="149.398µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38264" resp=200
I0906 20:34:15.212285       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0906 20:34:22.619254       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0906 20:34:22.946475       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) returned with <nil>
I0906 20:34:22.946527       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3) succeeded
I0906 20:34:22.946538       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3 was detached from node:capz-hx45zt-mp-0000001
... skipping 4 lines ...
I0906 20:34:27.647673       1 gc_controller.go:161] GC'ing orphaned
I0906 20:34:27.647698       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:34:27.674223       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:34:27.768594       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:34:27.768711       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9241/pvc-x9gn8" with version 2601
I0906 20:34:27.768650       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" with version 2760
I0906 20:34:27.768905       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase: Failed, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:34:27.768788       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-9241/pvc-x9gn8]: phase: Bound, bound to: "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641", bindCompleted: true, boundByController: true
I0906 20:34:27.769056       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9241/pvc-x9gn8]: volume "pvc-6f4d74be-c1f6-4ae1-837d-34355934e641" found: phase: Bound, bound to: "azuredisk-9241/pvc-x9gn8 (uid: 6f4d74be-c1f6-4ae1-837d-34355934e641)", boundByController: true
I0906 20:34:27.769074       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-9241/pvc-x9gn8]: claim is already correctly bound
I0906 20:34:27.769100       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is bound to claim azuredisk-9241/pvc-g9bdz
I0906 20:34:27.769164       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: claim azuredisk-9241/pvc-g9bdz not found
I0906 20:34:27.769233       1 pv_controller.go:1108] reclaimVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: policy is Delete
... skipping 25 lines ...
I0906 20:34:32.349782       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 21 items received
I0906 20:34:32.767681       1 node_lifecycle_controller.go:1047] Node capz-hx45zt-control-plane-nhv8k ReadyCondition updated. Updating timestamp.
I0906 20:34:32.997480       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3
I0906 20:34:32.997694       1 pv_controller.go:1436] volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" deleted
I0906 20:34:32.997799       1 pv_controller.go:1284] deleteVolumeOperation [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: success
I0906 20:34:33.003210       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3" with version 2800
I0906 20:34:33.003244       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: phase: Failed, bound to: "azuredisk-9241/pvc-g9bdz (uid: 4d2297e7-323f-4b05-9329-c34e9070c3a3)", boundByController: true
I0906 20:34:33.003391       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: volume is bound to claim azuredisk-9241/pvc-g9bdz
I0906 20:34:33.003414       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: claim azuredisk-9241/pvc-g9bdz not found
I0906 20:34:33.003580       1 pv_controller.go:1108] reclaimVolume[pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3]: policy is Delete
I0906 20:34:33.003601       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3[0195b817-9550-4b4a-a921-492766754864]]
I0906 20:34:33.003663       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3] started
I0906 20:34:33.003891       1 pv_protection_controller.go:205] Got event on PV pvc-4d2297e7-323f-4b05-9329-c34e9070c3a3
... skipping 459 lines ...
I0906 20:35:11.756902       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:11.756994       1 pv_controller.go:1108] reclaimVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: policy is Delete
I0906 20:35:11.757082       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]]
I0906 20:35:11.757111       1 pv_controller.go:1764] operation "delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]" is already running, skipping
I0906 20:35:11.758592       1 pv_controller.go:1341] isVolumeReleased[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is released
I0906 20:35:11.758669       1 pv_controller.go:1405] doDeleteVolume [pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]
I0906 20:35:11.779465       1 pv_controller.go:1260] deletion of volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:35:11.779489       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: set phase Failed
I0906 20:35:11.779500       1 pv_controller.go:858] updating PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: set phase Failed
I0906 20:35:11.782860       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2933
I0906 20:35:11.783435       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase: Failed, bound to: "azuredisk-9336/pvc-tdg69 (uid: 3ee152f0-de84-4486-b80c-c0ea5c264b50)", boundByController: true
I0906 20:35:11.783632       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is bound to claim azuredisk-9336/pvc-tdg69
I0906 20:35:11.783656       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:11.783512       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2933
I0906 20:35:11.783707       1 pv_controller.go:879] volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" entered phase "Failed"
I0906 20:35:11.783721       1 pv_controller.go:901] volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:35:11.783789       1 goroutinemap.go:150] Operation for "delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]" failed. No retries permitted until 2022-09-06 20:35:12.28374236 +0000 UTC m=+1016.808382081 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:35:11.783533       1 pv_protection_controller.go:205] Got event on PV pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50
I0906 20:35:11.784054       1 event.go:291] "Event occurred" object="pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:35:11.784112       1 pv_controller.go:1108] reclaimVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: policy is Delete
I0906 20:35:11.784129       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]]
I0906 20:35:11.784139       1 pv_controller.go:1766] operation "delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]" postponed due to exponential backoff
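Note: the E-level goroutinemap lines above come from the controller's per-operation retry backoff — the first failed delete may not be retried for 500ms, the next failure doubles the wait to 1s, then 2s, and so on. A rough sketch of that doubling policy (illustrative only, not the actual goroutinemap implementation):

package sketch

import "time"

// backoff tracks the retry delay for one named operation, doubling on each
// failure up to a cap, mirroring the 500ms -> 1s -> 2s progression in the log.
type backoff struct {
	initial, max time.Duration
	current      time.Duration
	notBefore    time.Time
}

func (b *backoff) failed(now time.Time) {
	if b.current == 0 {
		b.current = b.initial // e.g. 500ms on the first failure
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	b.notBefore = now.Add(b.current) // "No retries permitted until ..."
}

func (b *backoff) allowed(now time.Time) bool {
	return !now.Before(b.notBefore)
}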
I0906 20:35:12.676112       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:35:12.770485       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:35:12.770593       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2933
I0906 20:35:12.770620       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-vdn6w" with version 2860
I0906 20:35:12.770636       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-9336/pvc-vdn6w]: phase: Bound, bound to: "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f", bindCompleted: true, boundByController: true
I0906 20:35:12.770647       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase: Failed, bound to: "azuredisk-9336/pvc-tdg69 (uid: 3ee152f0-de84-4486-b80c-c0ea5c264b50)", boundByController: true
I0906 20:35:12.770669       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-vdn6w]: volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" found: phase: Bound, bound to: "azuredisk-9336/pvc-vdn6w (uid: 44659c5c-0c28-44bc-bc26-b43c8016408f)", boundByController: true
I0906 20:35:12.770671       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is bound to claim azuredisk-9336/pvc-tdg69
I0906 20:35:12.770678       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-vdn6w]: claim is already correctly bound
I0906 20:35:12.770687       1 pv_controller.go:1012] binding volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" to claim "azuredisk-9336/pvc-vdn6w"
I0906 20:35:12.770690       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:12.770696       1 pv_controller.go:910] updating PersistentVolume[pvc-44659c5c-0c28-44bc-bc26-b43c8016408f]: binding to "azuredisk-9336/pvc-vdn6w"
... skipping 16 lines ...
I0906 20:35:12.770822       1 pv_controller.go:1039] volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-vdn6w (uid: 44659c5c-0c28-44bc-bc26-b43c8016408f)", boundByController: true
I0906 20:35:12.770835       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-vdn6w" status after binding: phase: Bound, bound to: "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f", bindCompleted: true, boundByController: true
I0906 20:35:12.770788       1 pv_controller.go:858] updating PersistentVolume[pvc-44659c5c-0c28-44bc-bc26-b43c8016408f]: set phase Bound
I0906 20:35:12.770848       1 pv_controller.go:861] updating PersistentVolume[pvc-44659c5c-0c28-44bc-bc26-b43c8016408f]: phase Bound already set
I0906 20:35:12.775161       1 pv_controller.go:1341] isVolumeReleased[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is released
I0906 20:35:12.775179       1 pv_controller.go:1405] doDeleteVolume [pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]
I0906 20:35:12.796102       1 pv_controller.go:1260] deletion of volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:35:12.796124       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: set phase Failed
I0906 20:35:12.796132       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase Failed already set
E0906 20:35:12.796186       1 goroutinemap.go:150] Operation for "delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]" failed. No retries permitted until 2022-09-06 20:35:13.796139762 +0000 UTC m=+1018.320779483 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:35:14.345468       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:35:14.410490       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="68µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46500" resp=200
I0906 20:35:17.244330       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-hx45zt-mp-0000001"
I0906 20:35:17.244361       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 to the node "capz-hx45zt-mp-0000001" mounted false
I0906 20:35:17.244372       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f to the node "capz-hx45zt-mp-0000001" mounted false
I0906 20:35:17.333393       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f\"}]}}" for node "capz-hx45zt-mp-0000001" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f 1}]
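Note: the status update above is the attach/detach controller recording, on the Node object, which disks it still considers attached. A small illustration of building that patch payload with only the standard library (the field names match node.status.volumesAttached; everything else here is hypothetical):

package sketch

import (
	"encoding/json"
	"fmt"
)

// attachedVolume mirrors the two fields visible in the logged patch.
type attachedVolume struct {
	Name       string `json:"name"`
	DevicePath string `json:"devicePath"`
}

// volumesAttachedPatch builds the payload logged as
// `Updating status "{\"status\":{\"volumesAttached\":[...]}}"`.
func volumesAttachedPatch(vols []attachedVolume) ([]byte, error) {
	body := map[string]any{
		"status": map[string]any{"volumesAttached": vols},
	}
	return json.Marshal(body)
}

func printExamplePatch() {
	// Disk URI shortened here for readability; the log shows the full path.
	p, _ := volumesAttachedPatch([]attachedVolume{{
		Name:       "kubernetes.io/azure-disk/.../capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f",
		DevicePath: "1",
	}})
	fmt.Println(string(p))
}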
... skipping 20 lines ...
I0906 20:35:27.617300       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:35:27.649600       1 gc_controller.go:161] GC'ing orphaned
I0906 20:35:27.649626       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:35:27.677019       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:35:27.771284       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:35:27.771352       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2933
I0906 20:35:27.771411       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase: Failed, bound to: "azuredisk-9336/pvc-tdg69 (uid: 3ee152f0-de84-4486-b80c-c0ea5c264b50)", boundByController: true
I0906 20:35:27.771477       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is bound to claim azuredisk-9336/pvc-tdg69
I0906 20:35:27.771527       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:27.771557       1 pv_controller.go:1108] reclaimVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: policy is Delete
I0906 20:35:27.771617       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]]
I0906 20:35:27.771709       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" with version 2858
I0906 20:35:27.771781       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50] started
... skipping 18 lines ...
I0906 20:35:27.773461       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-vdn6w] status: phase Bound already set
I0906 20:35:27.773480       1 pv_controller.go:1038] volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" bound to claim "azuredisk-9336/pvc-vdn6w"
I0906 20:35:27.773543       1 pv_controller.go:1039] volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-vdn6w (uid: 44659c5c-0c28-44bc-bc26-b43c8016408f)", boundByController: true
I0906 20:35:27.773590       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-vdn6w" status after binding: phase: Bound, bound to: "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f", bindCompleted: true, boundByController: true
I0906 20:35:27.778642       1 pv_controller.go:1341] isVolumeReleased[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is released
I0906 20:35:27.778658       1 pv_controller.go:1405] doDeleteVolume [pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]
I0906 20:35:27.778711       1 pv_controller.go:1260] deletion of volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) since it's in attaching or detaching state
I0906 20:35:27.778792       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: set phase Failed
I0906 20:35:27.778831       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase Failed already set
E0906 20:35:27.778892       1 goroutinemap.go:150] Operation for "delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]" failed. No retries permitted until 2022-09-06 20:35:29.778864461 +0000 UTC m=+1034.303504182 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) since it's in attaching or detaching state"
I0906 20:35:28.442574       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:35:28.713444       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0906 20:35:32.751014       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) returned with <nil>
I0906 20:35:32.751062       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50) succeeded
I0906 20:35:32.751071       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50 was detached from node:capz-hx45zt-mp-0000001
I0906 20:35:32.751094       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50") on node "capz-hx45zt-mp-0000001" 
I0906 20:35:32.751134       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-hx45zt-mp-0000001, refreshing the cache
I0906 20:35:32.790108       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f"
I0906 20:35:32.790140       1 azure_controller_vmss.go:175] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-44659c5c-0c28-44bc-bc26-b43c8016408f)
I0906 20:35:34.410680       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="123.799µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36334" resp=200
I0906 20:35:42.677920       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:35:42.772322       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:35:42.772390       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2933
I0906 20:35:42.772425       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase: Failed, bound to: "azuredisk-9336/pvc-tdg69 (uid: 3ee152f0-de84-4486-b80c-c0ea5c264b50)", boundByController: true
I0906 20:35:42.772459       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is bound to claim azuredisk-9336/pvc-tdg69
I0906 20:35:42.772475       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:42.772484       1 pv_controller.go:1108] reclaimVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: policy is Delete
I0906 20:35:42.772498       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]]
I0906 20:35:42.772513       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-44659c5c-0c28-44bc-bc26-b43c8016408f" with version 2858
I0906 20:35:42.772528       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-44659c5c-0c28-44bc-bc26-b43c8016408f]: phase: Bound, bound to: "azuredisk-9336/pvc-vdn6w (uid: 44659c5c-0c28-44bc-bc26-b43c8016408f)", boundByController: true
... skipping 26 lines ...
I0906 20:35:47.649785       1 gc_controller.go:161] GC'ing orphaned
I0906 20:35:47.649845       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:35:47.966322       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50
I0906 20:35:47.966371       1 pv_controller.go:1436] volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" deleted
I0906 20:35:47.966416       1 pv_controller.go:1284] deleteVolumeOperation [pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: success
I0906 20:35:47.987766       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50" with version 2988
I0906 20:35:47.987816       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: phase: Failed, bound to: "azuredisk-9336/pvc-tdg69 (uid: 3ee152f0-de84-4486-b80c-c0ea5c264b50)", boundByController: true
I0906 20:35:47.988002       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: volume is bound to claim azuredisk-9336/pvc-tdg69
I0906 20:35:47.988159       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: claim azuredisk-9336/pvc-tdg69 not found
I0906 20:35:47.988251       1 pv_controller.go:1108] reclaimVolume[pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50]: policy is Delete
I0906 20:35:47.988275       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50[2c19b01b-f1b2-4101-9894-4a5da9e1f495]]
I0906 20:35:47.988368       1 pv_protection_controller.go:205] Got event on PV pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50
I0906 20:35:47.988429       1 pv_protection_controller.go:125] Processing PV pvc-3ee152f0-de84-4486-b80c-c0ea5c264b50
... skipping 377 lines ...
I0906 20:36:08.929462       1 pv_controller.go:1038] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" bound to claim "azuredisk-8591/pvc-6v5z9"
I0906 20:36:08.929499       1 pv_controller.go:1039] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" status after binding: phase: Bound, bound to: "azuredisk-8591/pvc-6v5z9 (uid: 84da95c0-4a27-479a-9043-84375f142a41)", boundByController: true
I0906 20:36:08.929522       1 pv_controller.go:1040] claim "azuredisk-8591/pvc-6v5z9" status after binding: phase: Bound, bound to: "pvc-84da95c0-4a27-479a-9043-84375f142a41", bindCompleted: true, boundByController: true
I0906 20:36:08.929777       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-8591/pvc-6v5z9"
I0906 20:36:09.146357       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2205
I0906 20:36:09.165605       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2205, name default-token-4cjbw, uid 2434565d-4bd9-4b59-9778-f422b81239b6, event type delete
E0906 20:36:09.186332       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2205/default: secrets "default-token-kj8mc" is forbidden: unable to create new content in namespace azuredisk-2205 because it is being terminated
I0906 20:36:09.201038       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2205, name kube-root-ca.crt, uid 340b611d-b7ea-45ba-a49d-013eb4de1602, event type delete
I0906 20:36:09.203729       1 publisher.go:181] Finished syncing namespace "azuredisk-2205" (2.644673ms)
I0906 20:36:09.218886       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2205/default), service account deleted, removing tokens
I0906 20:36:09.219482       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2205, name default, uid b8543a2a-37a3-4728-9047-bb4b2505cd7f, event type delete
I0906 20:36:09.219508       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2205" (3.2µs)
I0906 20:36:09.314275       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2205" (2.8µs)
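Note: the tokens_controller error at 20:36:09 is expected during namespace teardown — the namespace is terminating, so the API server refuses to create a replacement token secret in it. A toy illustration of that admission rule (hypothetical types, not the real apiserver code):

package sketch

import (
	"errors"
	"time"
)

type namespace struct {
	Name              string
	DeletionTimestamp *time.Time
}

var errTerminating = errors.New("unable to create new content in namespace because it is being terminated")

// admitCreate mimics the check behind the "is forbidden" message: once a
// namespace carries a deletion timestamp, new objects may not be created in it.
func admitCreate(ns namespace) error {
	if ns.DeletionTimestamp != nil {
		return errTerminating
	}
	return nil
}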
... skipping 391 lines ...
I0906 20:36:44.050646       1 pv_controller.go:1108] reclaimVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: policy is Delete
I0906 20:36:44.050660       1 pv_controller.go:1753] scheduleOperation[delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]]
I0906 20:36:44.050668       1 pv_controller.go:1764] operation "delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]" is already running, skipping
I0906 20:36:44.050682       1 pv_protection_controller.go:205] Got event on PV pvc-84da95c0-4a27-479a-9043-84375f142a41
I0906 20:36:44.056359       1 pv_controller.go:1341] isVolumeReleased[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is released
I0906 20:36:44.056475       1 pv_controller.go:1405] doDeleteVolume [pvc-84da95c0-4a27-479a-9043-84375f142a41]
I0906 20:36:44.078168       1 pv_controller.go:1260] deletion of volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
I0906 20:36:44.078188       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-84da95c0-4a27-479a-9043-84375f142a41]: set phase Failed
I0906 20:36:44.078198       1 pv_controller.go:858] updating PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: set phase Failed
I0906 20:36:44.086721       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" with version 3182
I0906 20:36:44.086750       1 pv_controller.go:879] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" entered phase "Failed"
I0906 20:36:44.086761       1 pv_controller.go:901] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted
E0906 20:36:44.086812       1 goroutinemap.go:150] Operation for "delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]" failed. No retries permitted until 2022-09-06 20:36:44.586781811 +0000 UTC m=+1109.111421432 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:36:44.087069       1 event.go:291] "Event occurred" object="pvc-84da95c0-4a27-479a-9043-84375f142a41" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/virtualMachineScaleSets/capz-hx45zt-mp-0/virtualMachines/capz-hx45zt-mp-0_1), could not be deleted"
I0906 20:36:44.087191       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" with version 3182
I0906 20:36:44.087216       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: phase: Failed, bound to: "azuredisk-8591/pvc-6v5z9 (uid: 84da95c0-4a27-479a-9043-84375f142a41)", boundByController: true
I0906 20:36:44.087240       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is bound to claim azuredisk-8591/pvc-6v5z9
I0906 20:36:44.087259       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: claim azuredisk-8591/pvc-6v5z9 not found
I0906 20:36:44.087266       1 pv_controller.go:1108] reclaimVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: policy is Delete
I0906 20:36:44.087281       1 pv_controller.go:1753] scheduleOperation[delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]]
I0906 20:36:44.087289       1 pv_controller.go:1766] operation "delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]" postponed due to exponential backoff
I0906 20:36:44.087304       1 pv_protection_controller.go:205] Got event on PV pvc-84da95c0-4a27-479a-9043-84375f142a41
... skipping 53 lines ...
I0906 20:36:57.776772       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: volume is bound to claim azuredisk-8591/pvc-87cfm
I0906 20:36:57.776786       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: claim azuredisk-8591/pvc-87cfm found: phase: Bound, bound to: "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac", bindCompleted: true, boundByController: true
I0906 20:36:57.776797       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: all is bound
I0906 20:36:57.776805       1 pv_controller.go:858] updating PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: set phase Bound
I0906 20:36:57.776813       1 pv_controller.go:861] updating PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: phase Bound already set
I0906 20:36:57.776824       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" with version 3182
I0906 20:36:57.776841       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: phase: Failed, bound to: "azuredisk-8591/pvc-6v5z9 (uid: 84da95c0-4a27-479a-9043-84375f142a41)", boundByController: true
I0906 20:36:57.776858       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is bound to claim azuredisk-8591/pvc-6v5z9
I0906 20:36:57.776879       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: claim azuredisk-8591/pvc-6v5z9 not found
I0906 20:36:57.776888       1 pv_controller.go:1108] reclaimVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: policy is Delete
I0906 20:36:57.776903       1 pv_controller.go:1753] scheduleOperation[delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]]
I0906 20:36:57.776929       1 pv_controller.go:1232] deleteVolumeOperation [pvc-84da95c0-4a27-479a-9043-84375f142a41] started
I0906 20:36:57.777076       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8591/pvc-87cfm" with version 3088
... skipping 27 lines ...
I0906 20:36:57.779681       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8591/pvc-xfpcn] status: phase Bound already set
I0906 20:36:57.779782       1 pv_controller.go:1038] volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" bound to claim "azuredisk-8591/pvc-xfpcn"
I0906 20:36:57.779893       1 pv_controller.go:1039] volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" status after binding: phase: Bound, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:36:57.780002       1 pv_controller.go:1040] claim "azuredisk-8591/pvc-xfpcn" status after binding: phase: Bound, bound to: "pvc-beaa455d-79a3-4471-9257-62806b6802c6", bindCompleted: true, boundByController: true
I0906 20:36:57.783043       1 pv_controller.go:1341] isVolumeReleased[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is released
I0906 20:36:57.783060       1 pv_controller.go:1405] doDeleteVolume [pvc-84da95c0-4a27-479a-9043-84375f142a41]
I0906 20:36:57.783092       1 pv_controller.go:1260] deletion of volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) since it's in attaching or detaching state
I0906 20:36:57.783105       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-84da95c0-4a27-479a-9043-84375f142a41]: set phase Failed
I0906 20:36:57.783115       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-84da95c0-4a27-479a-9043-84375f142a41]: phase Failed already set
E0906 20:36:57.783150       1 goroutinemap.go:150] Operation for "delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]" failed. No retries permitted until 2022-09-06 20:36:58.783124088 +0000 UTC m=+1123.307763809 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) since it's in attaching or detaching state"
I0906 20:36:58.484103       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:37:02.738420       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) returned with <nil>
I0906 20:37:02.738470       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41) succeeded
I0906 20:37:02.738481       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41 was detached from node:capz-hx45zt-mp-0000001
I0906 20:37:02.738505       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41") on node "capz-hx45zt-mp-0000001" 
I0906 20:37:02.738554       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-hx45zt-mp-0000001, refreshing the cache
... skipping 5 lines ...
I0906 20:37:12.681906       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:12.715213       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0906 20:37:12.776671       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:12.776735       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" with version 3182
I0906 20:37:12.776735       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8591/pvc-xfpcn" with version 3083
I0906 20:37:12.776760       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8591/pvc-xfpcn]: phase: Bound, bound to: "pvc-beaa455d-79a3-4471-9257-62806b6802c6", bindCompleted: true, boundByController: true
I0906 20:37:12.776765       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: phase: Failed, bound to: "azuredisk-8591/pvc-6v5z9 (uid: 84da95c0-4a27-479a-9043-84375f142a41)", boundByController: true
I0906 20:37:12.776789       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is bound to claim azuredisk-8591/pvc-6v5z9
I0906 20:37:12.776793       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-8591/pvc-xfpcn]: volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" found: phase: Bound, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:37:12.776801       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-8591/pvc-xfpcn]: claim is already correctly bound
I0906 20:37:12.776807       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: claim azuredisk-8591/pvc-6v5z9 not found
I0906 20:37:12.776810       1 pv_controller.go:1012] binding volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" to claim "azuredisk-8591/pvc-xfpcn"
I0906 20:37:12.776814       1 pv_controller.go:1108] reclaimVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: policy is Delete
... skipping 47 lines ...
I0906 20:37:18.055954       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-84da95c0-4a27-479a-9043-84375f142a41
I0906 20:37:18.055985       1 pv_controller.go:1436] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" deleted
I0906 20:37:18.056000       1 pv_controller.go:1284] deleteVolumeOperation [pvc-84da95c0-4a27-479a-9043-84375f142a41]: success
I0906 20:37:18.062753       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" with version 3235
I0906 20:37:18.062995       1 pv_protection_controller.go:205] Got event on PV pvc-84da95c0-4a27-479a-9043-84375f142a41
I0906 20:37:18.063075       1 pv_protection_controller.go:125] Processing PV pvc-84da95c0-4a27-479a-9043-84375f142a41
I0906 20:37:18.063188       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: phase: Failed, bound to: "azuredisk-8591/pvc-6v5z9 (uid: 84da95c0-4a27-479a-9043-84375f142a41)", boundByController: true
I0906 20:37:18.063327       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: volume is bound to claim azuredisk-8591/pvc-6v5z9
I0906 20:37:18.063463       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: claim azuredisk-8591/pvc-6v5z9 not found
I0906 20:37:18.063552       1 pv_controller.go:1108] reclaimVolume[pvc-84da95c0-4a27-479a-9043-84375f142a41]: policy is Delete
I0906 20:37:18.063667       1 pv_controller.go:1753] scheduleOperation[delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]]
I0906 20:37:18.063753       1 pv_controller.go:1764] operation "delete-pvc-84da95c0-4a27-479a-9043-84375f142a41[14a45289-97e9-4ed1-bcf8-8251c3c684b3]" is already running, skipping
I0906 20:37:18.068476       1 pv_controller_base.go:235] volume "pvc-84da95c0-4a27-479a-9043-84375f142a41" deleted
... skipping 51 lines ...
I0906 20:37:19.925241       1 pv_controller.go:1108] reclaimVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: policy is Delete
I0906 20:37:19.925323       1 pv_controller.go:1753] scheduleOperation[delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]]
I0906 20:37:19.925333       1 pv_controller.go:1764] operation "delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]" is already running, skipping
I0906 20:37:19.925005       1 pv_controller.go:1232] deleteVolumeOperation [pvc-beaa455d-79a3-4471-9257-62806b6802c6] started
I0906 20:37:19.930586       1 pv_controller.go:1341] isVolumeReleased[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is released
I0906 20:37:19.930603       1 pv_controller.go:1405] doDeleteVolume [pvc-beaa455d-79a3-4471-9257-62806b6802c6]
I0906 20:37:19.930679       1 pv_controller.go:1260] deletion of volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state
I0906 20:37:19.930696       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: set phase Failed
I0906 20:37:19.930733       1 pv_controller.go:858] updating PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: set phase Failed
I0906 20:37:19.933238       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" with version 3243
I0906 20:37:19.933432       1 pv_controller.go:879] volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" entered phase "Failed"
I0906 20:37:19.933659       1 pv_controller.go:901] volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state
E0906 20:37:19.933884       1 goroutinemap.go:150] Operation for "delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]" failed. No retries permitted until 2022-09-06 20:37:20.433844787 +0000 UTC m=+1144.958484408 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state"
I0906 20:37:19.934317       1 event.go:291] "Event occurred" object="pvc-beaa455d-79a3-4471-9257-62806b6802c6" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state"
I0906 20:37:19.934618       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" with version 3243
I0906 20:37:19.934631       1 pv_protection_controller.go:205] Got event on PV pvc-beaa455d-79a3-4471-9257-62806b6802c6
I0906 20:37:19.934935       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: phase: Failed, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:37:19.935056       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is bound to claim azuredisk-8591/pvc-xfpcn
I0906 20:37:19.935245       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: claim azuredisk-8591/pvc-xfpcn not found
I0906 20:37:19.935335       1 pv_controller.go:1108] reclaimVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: policy is Delete
I0906 20:37:19.935436       1 pv_controller.go:1753] scheduleOperation[delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]]
I0906 20:37:19.935531       1 pv_controller.go:1766] operation "delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]" postponed due to exponential backoff
I0906 20:37:24.409619       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="61.299µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48182" resp=200
I0906 20:37:27.619469       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:27.652767       1 gc_controller.go:161] GC'ing orphaned
I0906 20:37:27.652793       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:37:27.682446       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:27.776812       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:27.776903       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" with version 3243
I0906 20:37:27.776962       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: phase: Failed, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:37:27.777009       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is bound to claim azuredisk-8591/pvc-xfpcn
I0906 20:37:27.777035       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: claim azuredisk-8591/pvc-xfpcn not found
I0906 20:37:27.777047       1 pv_controller.go:1108] reclaimVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: policy is Delete
I0906 20:37:27.777076       1 pv_controller.go:1753] scheduleOperation[delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]]
I0906 20:37:27.777109       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac" with version 3086
I0906 20:37:27.777133       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac]: phase: Bound, bound to: "azuredisk-8591/pvc-87cfm (uid: 32855ef0-54eb-4bc6-9eb8-1d04f4905eac)", boundByController: true
... skipping 18 lines ...
I0906 20:37:27.777794       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8591/pvc-87cfm] status: phase Bound already set
I0906 20:37:27.777810       1 pv_controller.go:1038] volume "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac" bound to claim "azuredisk-8591/pvc-87cfm"
I0906 20:37:27.777829       1 pv_controller.go:1039] volume "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac" status after binding: phase: Bound, bound to: "azuredisk-8591/pvc-87cfm (uid: 32855ef0-54eb-4bc6-9eb8-1d04f4905eac)", boundByController: true
I0906 20:37:27.777845       1 pv_controller.go:1040] claim "azuredisk-8591/pvc-87cfm" status after binding: phase: Bound, bound to: "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac", bindCompleted: true, boundByController: true
I0906 20:37:27.782338       1 pv_controller.go:1341] isVolumeReleased[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is released
I0906 20:37:27.782357       1 pv_controller.go:1405] doDeleteVolume [pvc-beaa455d-79a3-4471-9257-62806b6802c6]
I0906 20:37:27.782410       1 pv_controller.go:1260] deletion of volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state
I0906 20:37:27.782442       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: set phase Failed
I0906 20:37:27.782454       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: phase Failed already set
E0906 20:37:27.782533       1 goroutinemap.go:150] Operation for "delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]" failed. No retries permitted until 2022-09-06 20:37:28.782507015 +0000 UTC m=+1153.307146736 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) since it's in attaching or detaching state"
I0906 20:37:28.496323       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0906 20:37:33.603483       1 azure_controller_vmss.go:187] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) returned with <nil>
I0906 20:37:33.603528       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6) succeeded
I0906 20:37:33.603537       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6 was detached from node:capz-hx45zt-mp-0000001
I0906 20:37:33.603670       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6") on node "capz-hx45zt-mp-0000001" 
I0906 20:37:34.410353       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38204" resp=200
I0906 20:37:37.608004       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 0 items received
I0906 20:37:42.683215       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0906 20:37:42.777626       1 pv_controller_base.go:528] resyncing PV controller
I0906 20:37:42.777739       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8591/pvc-87cfm" with version 3088
I0906 20:37:42.777818       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" with version 3243
I0906 20:37:42.777936       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: phase: Failed, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:37:42.777820       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8591/pvc-87cfm]: phase: Bound, bound to: "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac", bindCompleted: true, boundByController: true
I0906 20:37:42.778063       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is bound to claim azuredisk-8591/pvc-xfpcn
I0906 20:37:42.778169       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: claim azuredisk-8591/pvc-xfpcn not found
I0906 20:37:42.778101       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-8591/pvc-87cfm]: volume "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac" found: phase: Bound, bound to: "azuredisk-8591/pvc-87cfm (uid: 32855ef0-54eb-4bc6-9eb8-1d04f4905eac)", boundByController: true
I0906 20:37:42.778331       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-8591/pvc-87cfm]: claim is already correctly bound
I0906 20:37:42.778346       1 pv_controller.go:1012] binding volume "pvc-32855ef0-54eb-4bc6-9eb8-1d04f4905eac" to claim "azuredisk-8591/pvc-87cfm"
... skipping 24 lines ...
I0906 20:37:47.653518       1 gc_controller.go:161] GC'ing orphaned
I0906 20:37:47.653543       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:37:47.976652       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-beaa455d-79a3-4471-9257-62806b6802c6
I0906 20:37:47.976680       1 pv_controller.go:1436] volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" deleted
I0906 20:37:47.976692       1 pv_controller.go:1284] deleteVolumeOperation [pvc-beaa455d-79a3-4471-9257-62806b6802c6]: success
I0906 20:37:47.983009       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-beaa455d-79a3-4471-9257-62806b6802c6" with version 3285
I0906 20:37:47.983134       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: phase: Failed, bound to: "azuredisk-8591/pvc-xfpcn (uid: beaa455d-79a3-4471-9257-62806b6802c6)", boundByController: true
I0906 20:37:47.983218       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: volume is bound to claim azuredisk-8591/pvc-xfpcn
I0906 20:37:47.983284       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: claim azuredisk-8591/pvc-xfpcn not found
I0906 20:37:47.983298       1 pv_controller.go:1108] reclaimVolume[pvc-beaa455d-79a3-4471-9257-62806b6802c6]: policy is Delete
I0906 20:37:47.983313       1 pv_controller.go:1753] scheduleOperation[delete-pvc-beaa455d-79a3-4471-9257-62806b6802c6[fae1e360-1e8b-42e8-905c-c7b61c39aaee]]
I0906 20:37:47.983367       1 pv_controller.go:1232] deleteVolumeOperation [pvc-beaa455d-79a3-4471-9257-62806b6802c6] started
I0906 20:37:47.983015       1 pv_protection_controller.go:205] Got event on PV pvc-beaa455d-79a3-4471-9257-62806b6802c6
... skipping 118 lines ...
I0906 20:38:06.041059       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-5786/pvc-65zxf"
I0906 20:38:06.042614       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-5786/pvc-65zxf[46a1a6a7-f770-458e-bf24-b88540259010]]
I0906 20:38:06.042716       1 pv_controller.go:1764] operation "provision-azuredisk-5786/pvc-65zxf[46a1a6a7-f770-458e-bf24-b88540259010]" is already running, skipping
I0906 20:38:06.043803       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010 StorageAccountType:Standard_LRS Size:10
I0906 20:38:06.198422       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8591
I0906 20:38:06.213176       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8591, name default-token-6k5xb, uid 55504c35-5410-421c-a514-e5dfd70f1115, event type delete
E0906 20:38:06.225731       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8591/default: secrets "default-token-86vsh" is forbidden: unable to create new content in namespace azuredisk-8591 because it is being terminated
I0906 20:38:06.309422       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fd2528ad22b, uid ac2f7a13-f668-49aa-8340-916c8d5e457f, event type delete
I0906 20:38:06.312702       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fd395a09a69, uid 5757aa25-ff76-4489-9255-124f61219d23, event type delete
I0906 20:38:06.315467       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fd4d3123219, uid 977caf96-45c4-4566-8b63-85093042c4cc, event type delete
I0906 20:38:06.318101       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fd60f579c22, uid 048fc667-ccfb-4c6f-9e7f-5fc71fd5d8ae, event type delete
I0906 20:38:06.320505       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fda148d47a6, uid dda3ef8f-ff87-447f-8910-d5a2772ba25c, event type delete
I0906 20:38:06.323429       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8591, name azuredisk-volume-tester-lmjtr.17125fda16bf8459, uid 6d2a6092-2cad-469b-ad2a-347b188bdb86, event type delete
... skipping 23 lines ...
I0906 20:38:07.328004       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5894" (2.9µs)
I0906 20:38:07.337830       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5894" (160.071094ms)
I0906 20:38:07.654369       1 gc_controller.go:161] GC'ing orphaned
I0906 20:38:07.654397       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0906 20:38:08.168450       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-493
I0906 20:38:08.191327       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-493, name default-token-vzvg8, uid f778182b-ebba-498d-8535-1c6c6f73e532, event type delete
E0906 20:38:08.205830       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-493/default: secrets "default-token-z2tsl" is forbidden: unable to create new content in namespace azuredisk-493 because it is being terminated
I0906 20:38:08.284840       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-493, name kube-root-ca.crt, uid 34a9063b-fc3f-4543-91d8-73a46004a6f6, event type delete
I0906 20:38:08.289404       1 publisher.go:181] Finished syncing namespace "azuredisk-493" (4.512454ms)
I0906 20:38:08.294616       1 tokens_controller.go:252] syncServiceAccount(azuredisk-493/default), service account deleted, removing tokens
I0906 20:38:08.294700       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-493, name default, uid 9d7e9fe8-f908-4e77-8dec-bfaea85e8f5c, event type delete
I0906 20:38:08.294726       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-493" (4.1µs)
I0906 20:38:08.310018       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-493, estimate: 0, errors: <nil>
... skipping 67 lines ...
I0906 20:38:09.100179       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-46a1a6a7-f770-458e-bf24-b88540259010" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010") from node "capz-hx45zt-mp-0000000" 
I0906 20:38:09.100270       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010" to node "capz-hx45zt-mp-0000000".
I0906 20:38:09.113715       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5710
I0906 20:38:09.137637       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010" lun 0 to node "capz-hx45zt-mp-0000000".
I0906 20:38:09.137688       1 azure_controller_vmss.go:101] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000000) - attach disk(capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-46a1a6a7-f770-458e-bf24-b88540259010) with DiskEncryptionSetID()
I0906 20:38:09.166360       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5710, name default-token-vvp4q, uid 303952b4-b0da-43ff-aaab-ebd4fb1a5799, event type delete
E0906 20:38:09.177825       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5710/default: secrets "default-token-82d8b" is forbidden: unable to create new content in namespace azuredisk-5710 because it is being terminated
I0906 20:38:09.230418       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5710, name kube-root-ca.crt, uid f28f6c06-7268-4f04-8a7e-2bd0363586b5, event type delete
I0906 20:38:09.233678       1 publisher.go:181] Finished syncing namespace "azuredisk-5710" (3.218768ms)
I0906 20:38:09.253994       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5710/default), service account deleted, removing tokens
I0906 20:38:09.254365       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5710, name default, uid d9d8e308-4edb-4a57-aa27-eb1cf34f7c3a, event type delete
I0906 20:38:09.254389       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5710" (3.3µs)
I0906 20:38:09.266198       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5710, estimate: 0, errors: <nil>
... skipping 456 lines ...
I0906 20:39:38.439798       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-hx45zt-mp-0000000, refreshing the cache
I0906 20:39:38.512883       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-hx45zt-dynamic-pvc-d9594232-555f-4862-b2da-cd1824d8a49f. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d9594232-555f-4862-b2da-cd1824d8a49f" to node "capz-hx45zt-mp-0000000".
I0906 20:39:38.546262       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d9594232-555f-4862-b2da-cd1824d8a49f" lun 0 to node "capz-hx45zt-mp-0000000".
I0906 20:39:38.546302       1 azure_controller_vmss.go:101] azureDisk - update(capz-hx45zt): vm(capz-hx45zt-mp-0000000) - attach disk(capz-hx45zt-dynamic-pvc-d9594232-555f-4862-b2da-cd1824d8a49f, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-hx45zt/providers/Microsoft.Compute/disks/capz-hx45zt-dynamic-pvc-d9594232-555f-4862-b2da-cd1824d8a49f) with DiskEncryptionSetID()
I0906 20:39:39.414764       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5786
I0906 20:39:39.431113       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5786, name default-token-lwvlz, uid 681dbc3a-b90f-4b92-bf81-ae2cffb65562, event type delete
E0906 20:39:39.443160       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5786/default: secrets "default-token-kx2cf" is forbidden: unable to create new content in namespace azuredisk-5786 because it is being terminated
I0906 20:39:39.459992       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5786/default), service account deleted, removing tokens
I0906 20:39:39.460301       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5786, name default, uid 9222abd0-c880-4f8d-b1af-cc17ea46200e, event type delete
I0906 20:39:39.460522       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5786" (2.1µs)
I0906 20:39:39.474771       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5786, name kube-root-ca.crt, uid 1fdcff67-41a7-4a67-a190-c40bcf6b7e30, event type delete
I0906 20:39:39.478484       1 publisher.go:181] Finished syncing namespace "azuredisk-5786" (3.580664ms)
I0906 20:39:39.536088       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5786, name azuredisk-volume-tester-k4jcn.17125fee2ea57bff, uid e7c8e34d-015a-45b5-95ef-cfe61a33383a, event type delete
... skipping 518 lines ...
I0906 20:40:59.918289       1 publisher.go:181] Finished syncing namespace "azuredisk-4657" (8.632213ms)
I0906 20:40:59.933168       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4657" (144.41515ms)
I0906 20:41:00.399992       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1359
I0906 20:41:00.423191       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1359, name kube-root-ca.crt, uid e93cd9db-7723-410b-96f5-fa2d73cc2e8f, event type delete
I0906 20:41:00.432705       1 publisher.go:181] Finished syncing namespace "azuredisk-1359" (9.470405ms)
I0906 20:41:00.487270       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1359, name default-token-pswm5, uid 947a4f58-fe37-4448-a155-ebdb3717e966, event type delete
E0906 20:41:00.540273       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1359/default: secrets "default-token-ft255" is forbidden: unable to create new content in namespace azuredisk-1359 because it is being terminated
I0906 20:41:00.626329       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1359/default), service account deleted, removing tokens
I0906 20:41:00.626378       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1359, name default, uid 0d1628b2-9656-494f-8dea-d57d2673b6de, event type delete
I0906 20:41:00.626569       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (2.2µs)
I0906 20:41:00.633899       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1359, estimate: 0, errors: <nil>
I0906 20:41:00.634776       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (1.9µs)
I0906 20:41:00.641413       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1359" (243.657954ms)
... skipping 9 lines ...
I0906 20:41:01.174265       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-565" (164.242851ms)
2022/09/06 20:41:01 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1096.797 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-z4tv7, container manager
STEP: Dumping workload cluster default/capz-hx45zt logs
Sep  6 20:42:32.687: INFO: Collecting logs for Linux node capz-hx45zt-control-plane-nhv8k in cluster capz-hx45zt in namespace default

Sep  6 20:43:32.688: INFO: Collecting boot logs for AzureMachine capz-hx45zt-control-plane-nhv8k

Failed to get logs for machine capz-hx45zt-control-plane-8c676, cluster default/capz-hx45zt: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  6 20:43:33.775: INFO: Collecting logs for Linux node capz-hx45zt-mp-0000001 in cluster capz-hx45zt in namespace default

Sep  6 20:44:33.777: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-hx45zt-mp-0

Sep  6 20:44:34.182: INFO: Collecting logs for Linux node capz-hx45zt-mp-0000000 in cluster capz-hx45zt in namespace default

Sep  6 20:45:34.184: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-hx45zt-mp-0

Failed to get logs for machine pool capz-hx45zt-mp-0, cluster default/capz-hx45zt: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-hx45zt kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-t7q86
STEP: Fetching kube-system pod logs took 730.283073ms
STEP: Dumping workload cluster default/capz-hx45zt Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-t7q86, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-hx45zt-control-plane-nhv8k
STEP: failed to find events of Pod "kube-scheduler-capz-hx45zt-control-plane-nhv8k"
STEP: Collecting events for Pod kube-system/etcd-capz-hx45zt-control-plane-nhv8k
STEP: Creating log watcher for controller kube-system/calico-node-dcvcp, container calico-node
STEP: failed to find events of Pod "etcd-capz-hx45zt-control-plane-nhv8k"
STEP: Collecting events for Pod kube-system/calico-node-dcvcp
STEP: Creating log watcher for controller kube-system/calico-node-hx5df, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-hx5df
STEP: Creating log watcher for controller kube-system/calico-node-km77p, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-km77p
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-b4bnt, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-b4bnt
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-fxl5b, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-fxl5b
STEP: Creating log watcher for controller kube-system/etcd-capz-hx45zt-control-plane-nhv8k, container etcd
STEP: Collecting events for Pod kube-system/kube-proxy-gs4mm
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-hx45zt-control-plane-nhv8k, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-hx45zt-control-plane-nhv8k
STEP: failed to find events of Pod "kube-apiserver-capz-hx45zt-control-plane-nhv8k"
STEP: Creating log watcher for controller kube-system/kube-proxy-snzft, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-hx45zt-control-plane-nhv8k, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-hx45zt-control-plane-nhv8k
STEP: failed to find events of Pod "kube-controller-manager-capz-hx45zt-control-plane-nhv8k"
STEP: Creating log watcher for controller kube-system/kube-proxy-ph8j9, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-proxy-gs4mm, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-ph8j9
STEP: Collecting events for Pod kube-system/kube-proxy-snzft
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-hx45zt-control-plane-nhv8k, container kube-scheduler
STEP: Fetching activity logs took 2.208698359s
... skipping 17 lines ...