Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-03 20:01
Elapsed: 52m50s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 626 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
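
The identity-secret step above is a create-or-replace pattern: the NotFound line is benign, since the script first checks for (or clears) any stale secret before recreating and labeling it. A minimal sketch of that pattern with placeholder values; the script's real variable names and label key are assumptions, not taken from this log:

# Hypothetical sketch of the create-identity-secret.sh pattern; values are placeholders.
kubectl delete secret cluster-identity-secret --ignore-not-found
kubectl create secret generic cluster-identity-secret \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"   # placeholder source for the client secret
kubectl label secret cluster-identity-secret \
  clusterctl.cluster.x-k8s.io/move-hierarchy=true        # assumed label; the log only records "labeled"
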
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-9buiac-kubeconfig; do sleep 1; done"
capz-9buiac-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-9buiac-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-9buiac-control-plane-2xbb8   NotReady   control-plane,master   7s    v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-9buiac-control-plane-2xbb8 condition met
node/capz-9buiac-mp-0000000 condition met
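
The harness waits on node readiness with timeout/grep polling loops as shown above; an equivalent one-shot check, shown only as an illustration and not what the Makefile actually runs, would be:

# Illustrative alternative to the grep-based polling above.
kubectl --kubeconfig=./kubeconfig wait node --all --for=condition=Ready --timeout=600s
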
... skipping 100 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  3 20:18:31.095: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.973 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:18:36.556: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dkh5n" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  3 20:18:36.659: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 103.064786ms
Sep  3 20:18:38.764: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207382126s
Sep  3 20:18:40.867: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311014212s
Sep  3 20:18:42.972: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415744125s
Sep  3 20:18:45.076: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.520159142s
Sep  3 20:18:47.180: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.623876187s
... skipping 2 lines ...
Sep  3 20:18:53.494: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 16.937379752s
Sep  3 20:18:55.598: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 19.042035522s
Sep  3 20:18:57.702: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 21.14568314s
Sep  3 20:18:59.815: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Pending", Reason="", readiness=false. Elapsed: 23.258454247s
Sep  3 20:19:01.925: INFO: Pod "azuredisk-volume-tester-dkh5n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.368472478s
STEP: Saw pod success
Sep  3 20:19:01.925: INFO: Pod "azuredisk-volume-tester-dkh5n" satisfied condition "Succeeded or Failed"
Sep  3 20:19:01.925: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-dkh5n"
Sep  3 20:19:02.040: INFO: Pod azuredisk-volume-tester-dkh5n has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-dkh5n in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:19:02.364: INFO: deleting PVC "azuredisk-1353"/"pvc-fftl8"
Sep  3 20:19:02.365: INFO: Deleting PersistentVolumeClaim "pvc-fftl8"
STEP: waiting for claim's PV "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" to be deleted
Sep  3 20:19:02.469: INFO: Waiting up to 10m0s for PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 to get deleted
Sep  3 20:19:02.571: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (102.560698ms)
Sep  3 20:19:07.675: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (5.206611762s)
Sep  3 20:19:12.784: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (10.315011828s)
Sep  3 20:19:17.891: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (15.422411179s)
Sep  3 20:19:23.000: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (20.530882644s)
Sep  3 20:19:28.107: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (25.638611025s)
Sep  3 20:19:33.212: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (30.743143689s)
Sep  3 20:19:38.320: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 found and phase=Failed (35.850930313s)
Sep  3 20:19:43.423: INFO: PersistentVolume pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 was removed
Sep  3 20:19:43.423: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  3 20:19:43.526: INFO: Claim "azuredisk-1353" in namespace "pvc-fftl8" doesn't exist in the system
Sep  3 20:19:43.526: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-6mfsk
Sep  3 20:19:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
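
Each dynamic-provisioning case in this suite follows the same shape: create a StorageClass (the generated SC names above suggest the in-tree kubernetes.io/azure-disk provisioner), bind a PVC, run a tester pod against it, then delete everything and wait for the PV to disappear. A minimal sketch of such objects, with placeholder names and sizes rather than the test's actual manifests:

# Illustrative StorageClass + PVC; names, size and parameters are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-azure-disk-sc
provisioner: kubernetes.io/azure-disk   # matches the SC names generated in this log
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-azure-disk-sc
  resources:
    requests:
      storage: 10Gi
EOF
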
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  3 20:20:58.453: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-2g4pd"
Sep  3 20:20:58.570: INFO: Error getting logs for pod azuredisk-volume-tester-2g4pd: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-2g4pd)
STEP: Deleting pod azuredisk-volume-tester-2g4pd in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:20:58.886: INFO: deleting PVC "azuredisk-1563"/"pvc-xvhpb"
Sep  3 20:20:58.886: INFO: Deleting PersistentVolumeClaim "pvc-xvhpb"
STEP: waiting for claim's PV "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" to be deleted
Sep  3 20:20:58.990: INFO: Waiting up to 10m0s for PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 to get deleted
Sep  3 20:20:59.093: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Bound (102.805003ms)
Sep  3 20:21:04.196: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Bound (5.20597242s)
Sep  3 20:21:09.304: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (10.313461056s)
Sep  3 20:21:14.408: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (15.417832394s)
Sep  3 20:21:19.514: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (20.523833071s)
Sep  3 20:21:24.623: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (25.632065314s)
Sep  3 20:21:29.726: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (30.735536107s)
Sep  3 20:21:34.829: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 found and phase=Failed (35.838924494s)
Sep  3 20:21:39.932: INFO: PersistentVolume pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 was removed
Sep  3 20:21:39.932: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Sep  3 20:21:40.034: INFO: Claim "azuredisk-1563" in namespace "pvc-xvhpb" doesn't exist in the system
Sep  3 20:21:40.034: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-qp2mv
Sep  3 20:21:40.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:21:41.885: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7lcdb" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  3 20:21:41.987: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 101.762708ms
Sep  3 20:21:44.091: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205327037s
Sep  3 20:21:46.194: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308856736s
Sep  3 20:21:48.297: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.412053666s
Sep  3 20:21:50.400: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514766996s
Sep  3 20:21:52.503: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617580767s
... skipping 9 lines ...
Sep  3 20:22:13.545: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 31.659315671s
Sep  3 20:22:15.647: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.7620454s
Sep  3 20:22:17.752: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.866518143s
Sep  3 20:22:19.862: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Pending", Reason="", readiness=false. Elapsed: 37.976374315s
Sep  3 20:22:21.971: INFO: Pod "azuredisk-volume-tester-7lcdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.085776784s
STEP: Saw pod success
Sep  3 20:22:21.971: INFO: Pod "azuredisk-volume-tester-7lcdb" satisfied condition "Succeeded or Failed"
Sep  3 20:22:21.971: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-7lcdb"
Sep  3 20:22:22.076: INFO: Pod azuredisk-volume-tester-7lcdb has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-7lcdb in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:22:22.397: INFO: deleting PVC "azuredisk-7463"/"pvc-cdcph"
Sep  3 20:22:22.397: INFO: Deleting PersistentVolumeClaim "pvc-cdcph"
STEP: waiting for claim's PV "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" to be deleted
Sep  3 20:22:22.501: INFO: Waiting up to 10m0s for PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe to get deleted
Sep  3 20:22:22.603: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (101.995765ms)
Sep  3 20:22:27.706: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (5.205171511s)
Sep  3 20:22:32.811: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (10.30956143s)
Sep  3 20:22:37.914: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (15.412757972s)
Sep  3 20:22:43.019: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (20.51808798s)
Sep  3 20:22:48.122: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (25.621083806s)
Sep  3 20:22:53.225: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe found and phase=Failed (30.723662962s)
Sep  3 20:22:58.332: INFO: PersistentVolume pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe was removed
Sep  3 20:22:58.332: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7463 to be removed
Sep  3 20:22:58.434: INFO: Claim "azuredisk-7463" in namespace "pvc-cdcph" doesn't exist in the system
Sep  3 20:22:58.434: INFO: deleting StorageClass azuredisk-7463-kubernetes.io-azure-disk-dynamic-sc-lmpc5
Sep  3 20:22:58.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7463" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  3 20:23:00.393: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-cwltk" in namespace "azuredisk-9241" to be "Error status code"
Sep  3 20:23:00.495: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 101.391959ms
Sep  3 20:23:02.598: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204336828s
Sep  3 20:23:04.702: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308245051s
Sep  3 20:23:06.804: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410711281s
Sep  3 20:23:08.907: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513913968s
Sep  3 20:23:11.011: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.61729381s
Sep  3 20:23:13.115: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.721014029s
Sep  3 20:23:15.217: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.82326148s
Sep  3 20:23:17.320: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.926461583s
Sep  3 20:23:19.423: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.029044344s
Sep  3 20:23:21.525: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.131528329s
Sep  3 20:23:23.635: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Pending", Reason="", readiness=false. Elapsed: 23.241287288s
Sep  3 20:23:25.746: INFO: Pod "azuredisk-volume-tester-cwltk": Phase="Failed", Reason="", readiness=false. Elapsed: 25.352126552s
STEP: Saw pod failure
Sep  3 20:23:25.746: INFO: Pod "azuredisk-volume-tester-cwltk" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 20:23:25.861: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-cwltk"
Sep  3 20:23:25.965: INFO: Pod azuredisk-volume-tester-cwltk has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-cwltk in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:23:26.284: INFO: deleting PVC "azuredisk-9241"/"pvc-h5tv9"
Sep  3 20:23:26.284: INFO: Deleting PersistentVolumeClaim "pvc-h5tv9"
STEP: waiting for claim's PV "pvc-03110910-388a-491c-9ab2-bc0e4e193711" to be deleted
Sep  3 20:23:26.388: INFO: Waiting up to 10m0s for PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 to get deleted
Sep  3 20:23:26.490: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (102.005416ms)
Sep  3 20:23:31.596: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (5.208281978s)
Sep  3 20:23:36.699: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (10.311830618s)
Sep  3 20:23:41.802: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (15.41468346s)
Sep  3 20:23:46.909: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (20.520984453s)
Sep  3 20:23:52.011: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (25.623843409s)
Sep  3 20:23:57.115: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (30.727215936s)
Sep  3 20:24:02.222: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (35.83405037s)
Sep  3 20:24:07.326: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 found and phase=Failed (40.938793916s)
Sep  3 20:24:12.429: INFO: PersistentVolume pvc-03110910-388a-491c-9ab2-bc0e4e193711 was removed
Sep  3 20:24:12.429: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  3 20:24:12.531: INFO: Claim "azuredisk-9241" in namespace "pvc-h5tv9" doesn't exist in the system
Sep  3 20:24:12.531: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-cxq26
Sep  3 20:24:12.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9241" for this suite.
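
The "Error status code" case above is driven by a pod whose command writes into a volume mounted read-only, which is exactly the "touch: /mnt/test-1/data: Read-only file system" failure captured in its logs. A hedged sketch of such a pod, with placeholder names, image, and claim, not the suite's actual manifest:

# Illustrative pod that reproduces a read-only write failure.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-readonly-tester
spec:
  restartPolicy: Never
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "touch /mnt/test-1/data"]   # expected to fail with "Read-only file system"
    volumeMounts:
    - name: data
      mountPath: /mnt/test-1
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: example-pvc   # placeholder claim name
EOF
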
... skipping 53 lines ...
Sep  3 20:25:19.382: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (5.210033026s)
Sep  3 20:25:24.487: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (10.314718139s)
Sep  3 20:25:29.594: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (15.422033415s)
Sep  3 20:25:34.698: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (20.525659787s)
Sep  3 20:25:39.803: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (25.631512736s)
Sep  3 20:25:44.906: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Bound (30.734110123s)
Sep  3 20:25:50.008: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (35.836512216s)
Sep  3 20:25:55.112: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (40.939626369s)
Sep  3 20:26:00.215: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (46.043025523s)
Sep  3 20:26:05.318: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (51.145666839s)
Sep  3 20:26:10.422: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (56.250349687s)
Sep  3 20:26:15.528: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (1m1.356104408s)
Sep  3 20:26:20.634: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c found and phase=Failed (1m6.46213217s)
Sep  3 20:26:25.741: INFO: PersistentVolume pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c was removed
Sep  3 20:26:25.741: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 20:26:25.843: INFO: Claim "azuredisk-9336" in namespace "pvc-djnjs" doesn't exist in the system
Sep  3 20:26:25.843: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-4b8qm
Sep  3 20:26:25.947: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-8fw8r"
Sep  3 20:26:26.058: INFO: Pod azuredisk-volume-tester-8fw8r has the following logs: 
... skipping 9 lines ...
Sep  3 20:26:36.809: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (10.31537159s)
Sep  3 20:26:41.915: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (15.421975008s)
Sep  3 20:26:47.019: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (20.525253301s)
Sep  3 20:26:52.122: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (25.628780531s)
Sep  3 20:26:57.227: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (30.733851382s)
Sep  3 20:27:02.331: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Bound (35.8374108s)
Sep  3 20:27:07.434: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (40.940898638s)
Sep  3 20:27:12.541: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (46.04762907s)
Sep  3 20:27:17.648: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (51.154682698s)
Sep  3 20:27:22.752: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (56.258315951s)
Sep  3 20:27:27.859: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (1m1.36574487s)
Sep  3 20:27:32.968: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (1m6.47480878s)
Sep  3 20:27:38.072: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 found and phase=Failed (1m11.579007435s)
Sep  3 20:27:43.176: INFO: PersistentVolume pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 was removed
Sep  3 20:27:43.177: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 20:27:43.279: INFO: Claim "azuredisk-9336" in namespace "pvc-xczk8" doesn't exist in the system
Sep  3 20:27:43.279: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-vmq9l
Sep  3 20:27:43.383: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-lnrk7"
Sep  3 20:27:43.494: INFO: Pod azuredisk-volume-tester-lnrk7 has the following logs: 
... skipping 9 lines ...
Sep  3 20:27:54.233: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (10.312321301s)
Sep  3 20:27:59.338: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (15.416618303s)
Sep  3 20:28:04.445: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (20.523755003s)
Sep  3 20:28:09.552: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (25.630800918s)
Sep  3 20:28:14.659: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (30.737774599s)
Sep  3 20:28:19.763: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Bound (35.842195804s)
Sep  3 20:28:24.870: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (40.949442408s)
Sep  3 20:28:29.974: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (46.053019689s)
Sep  3 20:28:35.081: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (51.15954553s)
Sep  3 20:28:40.185: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (56.263727304s)
Sep  3 20:28:45.289: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (1m1.367587405s)
Sep  3 20:28:50.395: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a found and phase=Failed (1m6.473824593s)
Sep  3 20:28:55.499: INFO: PersistentVolume pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a was removed
Sep  3 20:28:55.499: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 20:28:55.602: INFO: Claim "azuredisk-9336" in namespace "pvc-rlm5r" doesn't exist in the system
Sep  3 20:28:55.602: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-gj89w
Sep  3 20:28:55.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9336" for this suite.
... skipping 63 lines ...
Sep  3 20:31:54.369: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (10.322476316s)
Sep  3 20:31:59.475: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (15.429118703s)
Sep  3 20:32:04.580: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (20.533921483s)
Sep  3 20:32:09.688: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (25.641869587s)
Sep  3 20:32:14.795: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (30.749173293s)
Sep  3 20:32:19.904: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Bound (35.858109144s)
Sep  3 20:32:25.009: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (40.963163658s)
Sep  3 20:32:30.113: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (46.066588341s)
Sep  3 20:32:35.217: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (51.170798827s)
Sep  3 20:32:40.323: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (56.276965559s)
Sep  3 20:32:45.431: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (1m1.384420586s)
Sep  3 20:32:50.535: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e found and phase=Failed (1m6.488269565s)
Sep  3 20:32:55.642: INFO: PersistentVolume pvc-c63b5f69-2521-455c-89d6-29afffb92b2e was removed
Sep  3 20:32:55.642: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2205 to be removed
Sep  3 20:32:55.745: INFO: Claim "azuredisk-2205" in namespace "pvc-knpjz" doesn't exist in the system
Sep  3 20:32:55.745: INFO: deleting StorageClass azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-znf9s
Sep  3 20:32:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2205" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:33:19.028: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qd4t9" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Sep  3 20:33:19.131: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 102.919551ms
Sep  3 20:33:21.235: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206968605s
Sep  3 20:33:23.345: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317022055s
Sep  3 20:33:25.456: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.427733864s
Sep  3 20:33:27.566: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537844285s
Sep  3 20:33:29.677: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648409358s
... skipping 8 lines ...
Sep  3 20:33:48.675: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 29.646724915s
Sep  3 20:33:50.787: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.758908813s
Sep  3 20:33:52.898: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 33.869682861s
Sep  3 20:33:55.008: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.979862698s
Sep  3 20:33:57.119: INFO: Pod "azuredisk-volume-tester-qd4t9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.090523538s
STEP: Saw pod success
Sep  3 20:33:57.119: INFO: Pod "azuredisk-volume-tester-qd4t9" satisfied condition "Succeeded or Failed"
Sep  3 20:33:57.119: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-qd4t9"
Sep  3 20:33:57.235: INFO: Pod azuredisk-volume-tester-qd4t9 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-qd4t9 in namespace azuredisk-1387
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:33:57.556: INFO: deleting PVC "azuredisk-1387"/"pvc-w2fpv"
Sep  3 20:33:57.556: INFO: Deleting PersistentVolumeClaim "pvc-w2fpv"
STEP: waiting for claim's PV "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" to be deleted
Sep  3 20:33:57.661: INFO: Waiting up to 10m0s for PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 to get deleted
Sep  3 20:33:57.763: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (102.16886ms)
Sep  3 20:34:02.866: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (5.20521995s)
Sep  3 20:34:07.972: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (10.311009327s)
Sep  3 20:34:13.076: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (15.41456509s)
Sep  3 20:34:18.184: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (20.523293616s)
Sep  3 20:34:23.288: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (25.627377347s)
Sep  3 20:34:28.396: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (30.734762753s)
Sep  3 20:34:33.503: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (35.842397193s)
Sep  3 20:34:38.609: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (40.947542106s)
Sep  3 20:34:43.712: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (46.050647005s)
Sep  3 20:34:48.819: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 found and phase=Failed (51.158133654s)
Sep  3 20:34:53.926: INFO: PersistentVolume pvc-efb891df-4bc2-4b12-9f95-b636e179f837 was removed
Sep  3 20:34:53.927: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  3 20:34:54.029: INFO: Claim "azuredisk-1387" in namespace "pvc-w2fpv" doesn't exist in the system
Sep  3 20:34:54.029: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-fb58f
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:34:54.341: INFO: deleting PVC "azuredisk-1387"/"pvc-lvlbc"
Sep  3 20:34:54.341: INFO: Deleting PersistentVolumeClaim "pvc-lvlbc"
STEP: waiting for claim's PV "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" to be deleted
Sep  3 20:34:54.446: INFO: Waiting up to 10m0s for PersistentVolume pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 to get deleted
Sep  3 20:34:54.548: INFO: PersistentVolume pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 found and phase=Failed (102.777008ms)
Sep  3 20:34:59.652: INFO: PersistentVolume pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 found and phase=Failed (5.206637769s)
Sep  3 20:35:04.759: INFO: PersistentVolume pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 found and phase=Failed (10.312989259s)
Sep  3 20:35:09.862: INFO: PersistentVolume pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 was removed
Sep  3 20:35:09.862: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  3 20:35:09.965: INFO: Claim "azuredisk-1387" in namespace "pvc-lvlbc" doesn't exist in the system
Sep  3 20:35:09.965: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-m7rk2
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:35:22.877: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-b2rgs" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Sep  3 20:35:22.980: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 102.771495ms
Sep  3 20:35:25.084: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207132348s
Sep  3 20:35:27.195: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318101677s
Sep  3 20:35:29.306: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428386225s
Sep  3 20:35:31.416: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.538281552s
Sep  3 20:35:33.526: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.648708743s
... skipping 8 lines ...
Sep  3 20:35:52.523: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 29.645574185s
Sep  3 20:35:54.633: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 31.755775681s
Sep  3 20:35:56.744: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 33.866745181s
Sep  3 20:35:58.853: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Pending", Reason="", readiness=false. Elapsed: 35.976105995s
Sep  3 20:36:00.963: INFO: Pod "azuredisk-volume-tester-b2rgs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.08548237s
STEP: Saw pod success
Sep  3 20:36:00.963: INFO: Pod "azuredisk-volume-tester-b2rgs" satisfied condition "Succeeded or Failed"
Sep  3 20:36:00.963: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-b2rgs"
Sep  3 20:36:01.075: INFO: Pod azuredisk-volume-tester-b2rgs has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.059552 seconds, 1.6GB/s
hello world

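The "100+0 records in/out, 104857600 bytes" lines above are classic dd output (100 blocks of 1 MiB), so the tester pod's command is presumably something along these lines; this is an assumption, not the actual test command:

# Assumed shape of the command behind the dd output and "hello world" above.
dd if=/dev/zero of=/mnt/test-1/test.img bs=1M count=100 && echo 'hello world'
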
STEP: Deleting pod azuredisk-volume-tester-b2rgs in namespace azuredisk-4547
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:36:01.402: INFO: deleting PVC "azuredisk-4547"/"pvc-pt244"
Sep  3 20:36:01.402: INFO: Deleting PersistentVolumeClaim "pvc-pt244"
STEP: waiting for claim's PV "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" to be deleted
Sep  3 20:36:01.506: INFO: Waiting up to 10m0s for PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f to get deleted
Sep  3 20:36:01.609: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (102.759645ms)
Sep  3 20:36:06.715: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (5.208945169s)
Sep  3 20:36:11.819: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (10.312425346s)
Sep  3 20:36:16.926: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (15.419825125s)
Sep  3 20:36:22.030: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (20.523518418s)
Sep  3 20:36:27.138: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (25.631278407s)
Sep  3 20:36:32.245: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (30.738070132s)
Sep  3 20:36:37.349: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f found and phase=Failed (35.842248458s)
Sep  3 20:36:42.456: INFO: PersistentVolume pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f was removed
Sep  3 20:36:42.456: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  3 20:36:42.560: INFO: Claim "azuredisk-4547" in namespace "pvc-pt244" doesn't exist in the system
Sep  3 20:36:42.560: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-xnw8k
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 20:36:58.538: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2p4sf" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Sep  3 20:36:58.641: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 102.916226ms
Sep  3 20:37:00.745: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207413683s
Sep  3 20:37:02.855: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.317188422s
Sep  3 20:37:04.971: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433553621s
Sep  3 20:37:07.082: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.54410648s
Sep  3 20:37:09.193: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654980793s
... skipping 23 lines ...
Sep  3 20:37:59.861: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.323262678s
Sep  3 20:38:01.972: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.433837765s
Sep  3 20:38:04.082: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.543927699s
Sep  3 20:38:06.192: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.65456593s
Sep  3 20:38:08.302: INFO: Pod "azuredisk-volume-tester-2p4sf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.764324889s
STEP: Saw pod success
Sep  3 20:38:08.302: INFO: Pod "azuredisk-volume-tester-2p4sf" satisfied condition "Succeeded or Failed"
Sep  3 20:38:08.302: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-2p4sf"
Sep  3 20:38:08.414: INFO: Pod azuredisk-volume-tester-2p4sf has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-2p4sf in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:38:08.737: INFO: deleting PVC "azuredisk-7578"/"pvc-ntfnm"
Sep  3 20:38:08.737: INFO: Deleting PersistentVolumeClaim "pvc-ntfnm"
STEP: waiting for claim's PV "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" to be deleted
Sep  3 20:38:08.845: INFO: Waiting up to 10m0s for PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f to get deleted
Sep  3 20:38:08.948: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (102.752618ms)
Sep  3 20:38:14.055: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (5.20922034s)
Sep  3 20:38:19.159: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (10.313839821s)
Sep  3 20:38:24.265: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (15.419450094s)
Sep  3 20:38:29.373: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (20.527430095s)
Sep  3 20:38:34.477: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f found and phase=Failed (25.631987022s)
Sep  3 20:38:39.584: INFO: PersistentVolume pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f was removed
Sep  3 20:38:39.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  3 20:38:39.687: INFO: Claim "azuredisk-7578" in namespace "pvc-ntfnm" doesn't exist in the system
Sep  3 20:38:39.687: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-cbzpv
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:38:39.998: INFO: deleting PVC "azuredisk-7578"/"pvc-8lp2w"
Sep  3 20:38:39.998: INFO: Deleting PersistentVolumeClaim "pvc-8lp2w"
STEP: waiting for claim's PV "pvc-df7cf21e-c296-4449-a855-32571dc286e4" to be deleted
Sep  3 20:38:40.103: INFO: Waiting up to 10m0s for PersistentVolume pvc-df7cf21e-c296-4449-a855-32571dc286e4 to get deleted
Sep  3 20:38:40.206: INFO: PersistentVolume pvc-df7cf21e-c296-4449-a855-32571dc286e4 found and phase=Failed (102.764565ms)
Sep  3 20:38:45.310: INFO: PersistentVolume pvc-df7cf21e-c296-4449-a855-32571dc286e4 found and phase=Failed (5.206689328s)
Sep  3 20:38:50.417: INFO: PersistentVolume pvc-df7cf21e-c296-4449-a855-32571dc286e4 found and phase=Failed (10.313840054s)
Sep  3 20:38:55.520: INFO: PersistentVolume pvc-df7cf21e-c296-4449-a855-32571dc286e4 was removed
Sep  3 20:38:55.520: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  3 20:38:55.623: INFO: Claim "azuredisk-7578" in namespace "pvc-8lp2w" doesn't exist in the system
Sep  3 20:38:55.623: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-9x4rf
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 20:38:55.934: INFO: deleting PVC "azuredisk-7578"/"pvc-lj7pm"
Sep  3 20:38:55.934: INFO: Deleting PersistentVolumeClaim "pvc-lj7pm"
STEP: waiting for claim's PV "pvc-140e723b-90be-42c6-a322-5026194a3f94" to be deleted
Sep  3 20:38:56.040: INFO: Waiting up to 10m0s for PersistentVolume pvc-140e723b-90be-42c6-a322-5026194a3f94 to get deleted
Sep  3 20:38:56.143: INFO: PersistentVolume pvc-140e723b-90be-42c6-a322-5026194a3f94 found and phase=Failed (102.907408ms)
Sep  3 20:39:01.246: INFO: PersistentVolume pvc-140e723b-90be-42c6-a322-5026194a3f94 found and phase=Failed (5.206601379s)
Sep  3 20:39:06.350: INFO: PersistentVolume pvc-140e723b-90be-42c6-a322-5026194a3f94 found and phase=Failed (10.310210004s)
Sep  3 20:39:11.455: INFO: PersistentVolume pvc-140e723b-90be-42c6-a322-5026194a3f94 was removed
Sep  3 20:39:11.455: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  3 20:39:11.558: INFO: Claim "azuredisk-7578" in namespace "pvc-lj7pm" doesn't exist in the system
Sep  3 20:39:11.558: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-l7hbs
Sep  3 20:39:11.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7578" for this suite.
... skipping 489 lines ...
I0903 20:14:06.899516       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-03 20:06:59 +0000 UTC to 2032-08-31 20:11:59 +0000 UTC (now=2022-09-03 20:14:06.899501479 +0000 UTC))
I0903 20:14:06.899883       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662236045" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662236045" (2022-09-03 19:14:04 +0000 UTC to 2023-09-03 19:14:04 +0000 UTC (now=2022-09-03 20:14:06.899869863 +0000 UTC))
I0903 20:14:06.900210       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662236046" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662236046" (2022-09-03 19:14:05 +0000 UTC to 2023-09-03 19:14:05 +0000 UTC (now=2022-09-03 20:14:06.90019335 +0000 UTC))
I0903 20:14:06.900383       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0903 20:14:06.900508       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0903 20:14:06.901110       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0903 20:14:09.130451       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0903 20:14:09.130512       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0903 20:14:13.039895       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0903 20:14:13.040577       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-9buiac-control-plane-2xbb8_7d91d302-38bf-464a-8f52-210633c71d0c became leader"
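
The single leaderelection error above is transient: the lease get is briefly forbidden and the controller-manager acquires the kube-system/kube-controller-manager lease a few seconds later, most likely because the relevant RBAC had not finished propagating on the first attempt. The lease can be inspected with a standard kubectl call, for example:

# Inspect the controller-manager leader-election lease (illustrative).
kubectl -n kube-system get lease kube-controller-manager -o yaml
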
I0903 20:14:13.170040       1 request.go:600] Waited for 94.283303ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1?timeout=32s
I0903 20:14:13.219628       1 request.go:600] Waited for 143.859902ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0903 20:14:13.270012       1 request.go:600] Waited for 194.221148ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1?timeout=32s
I0903 20:14:13.320008       1 request.go:600] Waited for 244.155969ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1beta1?timeout=32s
... skipping 39 lines ...
I0903 20:14:13.624364       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0903 20:14:13.624480       1 reflector.go:219] Starting reflector *v1.Secret (13h50m57.209224508s) from k8s.io/client-go/informers/factory.go:134
I0903 20:14:13.624491       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0903 20:14:13.624603       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0903 20:14:13.624346       1 reflector.go:219] Starting reflector *v1.ServiceAccount (13h50m57.209224508s) from k8s.io/client-go/informers/factory.go:134
I0903 20:14:13.624623       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0903 20:14:13.653722       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
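
This warning only means the cloud provider skips initializing its config from the azure-cloud-provider secret (RBAC denies the get) and presumably carries on with its file-based configuration. If one wanted to confirm the permission gap, a quick check would be:

# Check whether the service account can read secrets in kube-system (illustrative).
kubectl auth can-i get secrets -n kube-system --as=system:serviceaccount:kube-system:azure-cloud-provider
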
I0903 20:14:13.653748       1 controllermanager.go:559] Starting "route"
I0903 20:14:13.653757       1 core.go:241] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0903 20:14:13.653786       1 controllermanager.go:566] Skipping "route"
I0903 20:14:13.653794       1 controllermanager.go:559] Starting "cloud-node-lifecycle"
I0903 20:14:13.658912       1 node_lifecycle_controller.go:76] Sending events to api server
I0903 20:14:13.658951       1 controllermanager.go:574] Started "cloud-node-lifecycle"
... skipping 114 lines ...
I0903 20:14:14.929046       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0903 20:14:14.929207       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0903 20:14:14.929224       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0903 20:14:14.929238       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0903 20:14:14.929247       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0903 20:14:14.929257       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0903 20:14:14.929304       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 20:14:14.929318       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0903 20:14:14.929458       1 controllermanager.go:574] Started "attachdetach"
I0903 20:14:14.929477       1 controllermanager.go:559] Starting "pv-protection"
I0903 20:14:14.929526       1 attach_detach_controller.go:328] Starting attach detach controller
I0903 20:14:14.929535       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0903 20:14:14.929581       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-control-plane-2xbb8"
W0903 20:14:14.929604       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-9buiac-control-plane-2xbb8" does not exist
I0903 20:14:14.985945       1 request.go:600] Waited for 58.125996ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/attachdetach-controller
I0903 20:14:15.080358       1 controllermanager.go:574] Started "pv-protection"
I0903 20:14:15.080390       1 controllermanager.go:559] Starting "resourcequota"
I0903 20:14:15.080428       1 pv_protection_controller.go:83] Starting PV protection controller
I0903 20:14:15.080438       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0903 20:14:15.136017       1 request.go:600] Waited for 56.325458ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/pv-protection-controller
... skipping 115 lines ...
I0903 20:14:17.028012       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0903 20:14:17.028022       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0903 20:14:17.028081       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0903 20:14:17.028114       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0903 20:14:17.028146       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0903 20:14:17.028157       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0903 20:14:17.028180       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 20:14:17.028189       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0903 20:14:17.028254       1 controllermanager.go:574] Started "persistentvolume-binder"
I0903 20:14:17.028265       1 controllermanager.go:559] Starting "ephemeral-volume"
I0903 20:14:17.028311       1 pv_controller_base.go:308] Starting persistent volume controller
I0903 20:14:17.028368       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0903 20:14:17.178422       1 controllermanager.go:574] Started "ephemeral-volume"
... skipping 499 lines ...
I0903 20:14:18.826962       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0903 20:14:18.826008       1 request.go:600] Waited for 290.421324ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/clusterrole-aggregation-controller/token
I0903 20:14:18.827256       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0903 20:14:18.827551       1 daemon_controller.go:1103] Updating daemon set status
I0903 20:14:18.827725       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (6.31756ms)
I0903 20:14:18.829315       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="316.329546ms"
I0903 20:14:18.829587       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:14:18.829786       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 20:14:18.82976448 +0000 UTC m=+14.061972373"
I0903 20:14:18.829537       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="615.786969ms"
I0903 20:14:18.830185       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:14:18.830346       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 20:14:18.830327767 +0000 UTC m=+14.062535560"
I0903 20:14:18.831075       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 20:14:18 +0000 UTC - now: 2022-09-03 20:14:18.831069951 +0000 UTC m=+14.063277744]
I0903 20:14:18.831795       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 20:14:18 +0000 UTC - now: 2022-09-03 20:14:18.831789735 +0000 UTC m=+14.063997528]
I0903 20:14:18.838099       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0903 20:14:18.846400       1 shared_informer.go:270] caches populated
I0903 20:14:18.846421       1 shared_informer.go:247] Caches are synced for garbage collector 
... skipping 7 lines ...
I0903 20:14:18.855349       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="766.083µs"
I0903 20:14:18.859104       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="29.323346ms"
I0903 20:14:18.859159       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 20:14:18.859123825 +0000 UTC m=+14.091331718"
I0903 20:14:18.860246       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0903 20:14:18.860434       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 20:14:18 +0000 UTC - now: 2022-09-03 20:14:18.860429096 +0000 UTC m=+14.092636889]
I0903 20:14:18.872565       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="13.424401ms"
I0903 20:14:18.872734       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0903 20:14:18.872888       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 20:14:18.872869319 +0000 UTC m=+14.105077212"
I0903 20:14:18.873489       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 20:14:18 +0000 UTC - now: 2022-09-03 20:14:18.873482905 +0000 UTC m=+14.105690698]
I0903 20:14:18.873671       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 599s
I0903 20:14:18.873896       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="977.778µs"
I0903 20:14:18.874942       1 request.go:600] Waited for 297.029876ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller/token
I0903 20:14:18.878649       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 20:14:18.87862339 +0000 UTC m=+14.110831283"
... skipping 340 lines ...
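[Editor's note] The burst above ends with two "Error syncing deployment ... the object has been modified; please apply your changes to the latest version and try again" entries (coredns and calico-kube-controllers). These are ordinary optimistic-concurrency conflicts: the controller re-queues the object and syncs again against the newer resourceVersion, which is why each error is immediately followed by another "Started syncing deployment" line. A minimal, hypothetical client-go sketch of the same conflict-retry pattern (the ./kubeconfig path, namespace/name, and the annotation mutation are illustrative assumptions, not anything this job runs):

// Sketch only: handle "object has been modified" (HTTP 409) by re-reading the
// latest object and retrying the mutation, via client-go's RetryOnConflict.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Assumed: a workload-cluster kubeconfig at ./kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest version on every attempt so a conflict can succeed next time.
		d, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example.io/touched"] = "true" // purely illustrative mutation
		_, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	fmt.Println("update result:", err)
}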
I0903 20:14:41.767142       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b4c6db7f0fc, ext:36999237393, loc:(*time.Location)(0x731ea80)}}
I0903 20:14:41.767182       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b4c6dba3e11, ext:36999388098, loc:(*time.Location)(0x731ea80)}}
I0903 20:14:41.767208       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0903 20:14:41.767243       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0903 20:14:41.767258       1 daemon_controller.go:1103] Updating daemon set status
I0903 20:14:41.767300       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (3.937429ms)
I0903 20:14:43.182383       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-9buiac-control-plane-2xbb8 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:14:28 +0000 UTC,LastTransitionTime:2022-09-03 20:13:56 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:14:38 +0000 UTC,LastTransitionTime:2022-09-03 20:14:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:14:43.182537       1 node_lifecycle_controller.go:1047] Node capz-9buiac-control-plane-2xbb8 ReadyCondition updated. Updating timestamp.
I0903 20:14:43.182570       1 node_lifecycle_controller.go:893] Node capz-9buiac-control-plane-2xbb8 is healthy again, removing all taints
I0903 20:14:43.182594       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0903 20:14:43.214498       1 daemon_controller.go:571] Pod calico-node-8mvrb updated.
I0903 20:14:43.216493       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b4c6dba3e11, ext:36999388098, loc:(*time.Location)(0x731ea80)}}
I0903 20:14:43.216728       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b4cccea9ae8, ext:38448909465, loc:(*time.Location)(0x731ea80)}}
... skipping 350 lines ...
I0903 20:16:24.185925       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b4cd343754b, ext:38555395836, loc:(*time.Location)(0x731ea80)}}
I0903 20:16:24.189378       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd0b660b498ef2, ext:139421577991, loc:(*time.Location)(0x731ea80)}}
I0903 20:16:24.189542       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-9buiac-mp-0000000], creating 1
I0903 20:16:24.190273       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-9buiac-mp-0000000}
I0903 20:16:24.191246       1 taint_manager.go:440] "Updating known taints on node" node="capz-9buiac-mp-0000000" taints=[]
I0903 20:16:24.191414       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
W0903 20:16:24.191560       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-9buiac-mp-0000000" does not exist
I0903 20:16:24.192200       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0b4833904845, ext:20097301494, loc:(*time.Location)(0x731ea80)}}
I0903 20:16:24.193781       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0b660b8cc690, ext:139425983041, loc:(*time.Location)(0x731ea80)}}
I0903 20:16:24.193933       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-9buiac-mp-0000000], creating 1
I0903 20:16:24.205800       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:16:24.224804       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:16:24.239759       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-9buiac-mp-0000000" new_ttl="0s"
... skipping 4 lines ...
I0903 20:16:24.257664       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 20:16:24.257672       1 controller.go:790] Finished updateLoadBalancerHosts
I0903 20:16:24.257679       1 controller.go:731] It took 1.7e-05 seconds to finish nodeSyncInternal
I0903 20:16:24.260025       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-9buiac-mp-0000001}
I0903 20:16:24.260188       1 taint_manager.go:440] "Updating known taints on node" node="capz-9buiac-mp-0000001" taints=[]
I0903 20:16:24.260309       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
W0903 20:16:24.260412       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-9buiac-mp-0000001" does not exist
I0903 20:16:24.288334       1 daemon_controller.go:514] Pod kube-proxy-k7pcr added.
I0903 20:16:24.288368       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd0b660b8cc690, ext:139425983041, loc:(*time.Location)(0x731ea80)}}
I0903 20:16:24.288870       1 disruption.go:415] addPod called on pod "kube-proxy-k7pcr"
I0903 20:16:24.289034       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-k7pcr, PodDisruptionBudget controller will avoid syncing.
I0903 20:16:24.289048       1 disruption.go:418] No matching pdb for pod "kube-proxy-k7pcr"
I0903 20:16:24.289185       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-k7pcr" podUID=a04f7e40-161d-4ccf-9631-197e47b84171
... skipping 419 lines ...
I0903 20:16:44.451183       1 controller_utils.go:209] Added [] Taint to Node capz-9buiac-mp-0000001
I0903 20:16:44.458943       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:16:44.459265       1 controller_utils.go:221] Made sure that Node capz-9buiac-mp-0000001 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0903 20:16:48.121207       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:16:48.123362       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:16:48.144711       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:16:48.201604       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-9buiac-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:16:34 +0000 UTC,LastTransitionTime:2022-09-03 20:16:24 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:16:44 +0000 UTC,LastTransitionTime:2022-09-03 20:16:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:16:48.201689       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:16:48.216896       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:16:48.217037       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-9buiac-mp-0000000}
I0903 20:16:48.217088       1 taint_manager.go:440] "Updating known taints on node" node="capz-9buiac-mp-0000000" taints=[]
I0903 20:16:48.217119       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-9buiac-mp-0000000"
I0903 20:16:48.217464       1 node_lifecycle_controller.go:893] Node capz-9buiac-mp-0000000 is healthy again, removing all taints
I0903 20:16:48.217644       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-9buiac-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 20:16:34 +0000 UTC,LastTransitionTime:2022-09-03 20:16:24 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 20:16:44 +0000 UTC,LastTransitionTime:2022-09-03 20:16:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 20:16:48.217824       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:16:48.322018       1 node_lifecycle_controller.go:893] Node capz-9buiac-mp-0000001 is healthy again, removing all taints
I0903 20:16:48.322283       1 node_lifecycle_controller.go:1214] Controller detected that zone northeurope::0 is now in state Normal.
I0903 20:16:48.322449       1 node_lifecycle_controller.go:1214] Controller detected that zone northeurope::1 is now in state Normal.
I0903 20:16:48.323147       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:16:48.323431       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-9buiac-mp-0000001}
... skipping 156 lines ...
I0903 20:18:33.520479       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:57264" resp=200
I0903 20:18:33.960976       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3.1µs)
I0903 20:18:34.075353       1 publisher.go:181] Finished syncing namespace "azuredisk-5194" (6.686276ms)
I0903 20:18:34.077389       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (8.987502ms)
I0903 20:18:34.095512       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0903 20:18:34.124192       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-tjcl2, uid 2c91f602-faa0-4589-9afe-2b67cbaba00a, event type delete
E0903 20:18:34.134777       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-xftpw" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0903 20:18:34.135458       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0903 20:18:34.135796       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.8µs)
I0903 20:18:34.135823       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid 97a84caf-2252-4d18-bf53-780c66f967c1, event type delete
I0903 20:18:34.244289       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid e9e70d1f-bc43-4c90-a8a1-eae922353380, event type delete
I0903 20:18:34.246767       1 publisher.go:181] Finished syncing namespace "azuredisk-8081" (2.308426ms)
I0903 20:18:34.266655       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.3µs)
... skipping 298 lines ...
I0903 20:19:02.443729       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:02.443819       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:02.443898       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:02.443982       1 pv_controller.go:1764] operation "delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]" is already running, skipping
I0903 20:19:02.445474       1 pv_controller.go:1341] isVolumeReleased[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is released
I0903 20:19:02.445490       1 pv_controller.go:1405] doDeleteVolume [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]
I0903 20:19:02.467429       1 pv_controller.go:1260] deletion of volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:19:02.467448       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: set phase Failed
I0903 20:19:02.467457       1 pv_controller.go:858] updating PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: set phase Failed
I0903 20:19:02.470377       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1290
I0903 20:19:02.470625       1 pv_controller.go:879] volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" entered phase "Failed"
I0903 20:19:02.470813       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1290
I0903 20:19:02.470850       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase: Failed, bound to: "azuredisk-1353/pvc-fftl8 (uid: 349846b4-5ad7-4687-be5b-e2ce5e8e7a56)", boundByController: true
I0903 20:19:02.470874       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is bound to claim azuredisk-1353/pvc-fftl8
I0903 20:19:02.470895       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:02.470905       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:02.470917       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:02.470929       1 pv_controller.go:1764] operation "delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]" is already running, skipping
I0903 20:19:02.470811       1 pv_controller.go:901] volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
E0903 20:19:02.470994       1 goroutinemap.go:150] Operation for "delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]" failed. No retries permitted until 2022-09-03 20:19:02.970964584 +0000 UTC m=+298.203172377 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:19:02.470795       1 pv_protection_controller.go:205] Got event on PV pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56
I0903 20:19:02.471218       1 event.go:291] "Event occurred" object="pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:19:03.128835       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:19:03.152103       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:19:03.152323       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1290
I0903 20:19:03.152388       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase: Failed, bound to: "azuredisk-1353/pvc-fftl8 (uid: 349846b4-5ad7-4687-be5b-e2ce5e8e7a56)", boundByController: true
I0903 20:19:03.152423       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is bound to claim azuredisk-1353/pvc-fftl8
I0903 20:19:03.152469       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:03.152480       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:03.152498       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:03.152558       1 pv_controller.go:1232] deleteVolumeOperation [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56] started
I0903 20:19:03.156768       1 pv_controller.go:1341] isVolumeReleased[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is released
I0903 20:19:03.156787       1 pv_controller.go:1405] doDeleteVolume [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]
I0903 20:19:03.180509       1 pv_controller.go:1260] deletion of volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:19:03.180533       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: set phase Failed
I0903 20:19:03.180544       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase Failed already set
E0903 20:19:03.180610       1 goroutinemap.go:150] Operation for "delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]" failed. No retries permitted until 2022-09-03 20:19:04.180578106 +0000 UTC m=+299.412785999 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:19:03.520378       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58354" resp=200
I0903 20:19:04.538502       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:19:04.538558       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:19:04.566747       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-9buiac-mp-0000001" succeeded. VolumesAttached: []
I0903 20:19:04.566956       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56") on node "capz-9buiac-mp-0000001" 
I0903 20:19:04.567269       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
... skipping 10 lines ...
I0903 20:19:18.126990       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:19:18.129167       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:19:18.147601       1 gc_controller.go:161] GC'ing orphaned
I0903 20:19:18.147624       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:19:18.152870       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:19:18.152921       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1290
I0903 20:19:18.152955       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase: Failed, bound to: "azuredisk-1353/pvc-fftl8 (uid: 349846b4-5ad7-4687-be5b-e2ce5e8e7a56)", boundByController: true
I0903 20:19:18.152990       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is bound to claim azuredisk-1353/pvc-fftl8
I0903 20:19:18.153007       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:18.153015       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:18.153031       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:18.153063       1 pv_controller.go:1232] deleteVolumeOperation [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56] started
I0903 20:19:18.160817       1 pv_controller.go:1341] isVolumeReleased[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is released
I0903 20:19:18.160834       1 pv_controller.go:1405] doDeleteVolume [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]
I0903 20:19:18.160866       1 pv_controller.go:1260] deletion of volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) since it's in attaching or detaching state
I0903 20:19:18.161006       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: set phase Failed
I0903 20:19:18.161022       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase Failed already set
E0903 20:19:18.161057       1 goroutinemap.go:150] Operation for "delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]" failed. No retries permitted until 2022-09-03 20:19:20.16103195 +0000 UTC m=+315.393239743 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56) since it's in attaching or detaching state"
I0903 20:19:18.221127       1 controller.go:272] Triggering nodeSync
I0903 20:19:18.221157       1 controller.go:291] nodeSync has been triggered
I0903 20:19:18.221165       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 20:19:18.221174       1 controller.go:790] Finished updateLoadBalancerHosts
I0903 20:19:18.221180       1 controller.go:731] It took 1.69e-05 seconds to finish nodeSyncInternal
I0903 20:19:18.399409       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
... skipping 8 lines ...
I0903 20:19:26.293233       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 30 items received
I0903 20:19:28.346010       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:19:29.468789       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 12 items received
I0903 20:19:33.130296       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:19:33.153689       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:19:33.153939       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1290
I0903 20:19:33.154012       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase: Failed, bound to: "azuredisk-1353/pvc-fftl8 (uid: 349846b4-5ad7-4687-be5b-e2ce5e8e7a56)", boundByController: true
I0903 20:19:33.154167       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is bound to claim azuredisk-1353/pvc-fftl8
I0903 20:19:33.154210       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:33.154224       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:33.154332       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:33.154415       1 pv_controller.go:1232] deleteVolumeOperation [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56] started
I0903 20:19:33.163446       1 pv_controller.go:1341] isVolumeReleased[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is released
... skipping 5 lines ...
I0903 20:19:38.385546       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56
I0903 20:19:38.385585       1 pv_controller.go:1436] volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" deleted
I0903 20:19:38.385599       1 pv_controller.go:1284] deleteVolumeOperation [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: success
I0903 20:19:38.394872       1 pv_protection_controller.go:205] Got event on PV pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56
I0903 20:19:38.395196       1 pv_protection_controller.go:125] Processing PV pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56
I0903 20:19:38.395025       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" with version 1345
I0903 20:19:38.395802       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: phase: Failed, bound to: "azuredisk-1353/pvc-fftl8 (uid: 349846b4-5ad7-4687-be5b-e2ce5e8e7a56)", boundByController: true
I0903 20:19:38.395917       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: volume is bound to claim azuredisk-1353/pvc-fftl8
I0903 20:19:38.396000       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: claim azuredisk-1353/pvc-fftl8 not found
I0903 20:19:38.396077       1 pv_controller.go:1108] reclaimVolume[pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56]: policy is Delete
I0903 20:19:38.396097       1 pv_controller.go:1753] scheduleOperation[delete-pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56[c1026bc6-d69b-4290-9f4d-866aecf60f24]]
I0903 20:19:38.396121       1 pv_controller.go:1232] deleteVolumeOperation [pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56] started
I0903 20:19:38.398757       1 pv_controller.go:1244] Volume "pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56" is already being deleted
... skipping 74 lines ...
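[Editor's note] In the sequence above, pvc-349846b4-5ad7-4687-be5b-e2ce5e8e7a56 cannot be deleted while its Azure disk is still attached to capz-9buiac-mp-0000001, so the PV controller's delete operation backs off 500ms -> 1s -> 2s between attempts and only succeeds after the detach completes (the managed disk is finally deleted at 20:19:38). A minimal sketch of that doubling backoff using apimachinery's wait.ExponentialBackoff; the poll counter below stands in for the real "is the disk still attached/detaching" check and is purely hypothetical:

// Sketch only: retry with a doubling delay until the (simulated) detach finishes.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Pretend the detach completes on the third poll; purely illustrative.
	remainingAttachedPolls := 3

	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay seen in the log
		Factor:   2.0,                    // 500ms, 1s, 2s, 4s, ...
		Steps:    6,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		remainingAttachedPolls--
		if remainingAttachedPolls > 0 {
			fmt.Println("disk still attached or detaching, will retry after backoff")
			return false, nil // not done yet; ExponentialBackoff sleeps and retries
		}
		fmt.Println("disk detached, safe to delete the managed disk now")
		return true, nil // done
	})
	fmt.Println("result:", err) // nil on success, wait.ErrWaitTimeout if Steps are exhausted
}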
I0903 20:19:50.275432       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2888
I0903 20:19:50.314998       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2888, name kube-root-ca.crt, uid 5b648e2d-1253-4850-8cbf-3d720f108c74, event type delete
I0903 20:19:50.317785       1 publisher.go:181] Finished syncing namespace "azuredisk-2888" (2.713541ms)
I0903 20:19:50.345992       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2888, name default-token-5q4z6, uid d4a498e1-c13c-451b-bf6f-f95736257f28, event type delete
I0903 20:19:50.358749       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2888" (2.4µs)
I0903 20:19:50.358802       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2888, name default, uid fad11110-6c58-4751-a951-53bf431af129, event type delete
E0903 20:19:50.358892       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2888/default: secrets "default-token-68mzc" is forbidden: unable to create new content in namespace azuredisk-2888 because it is being terminated
I0903 20:19:50.358986       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2888/default), service account deleted, removing tokens
I0903 20:19:50.447390       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2888" (2.6µs)
I0903 20:19:50.447604       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2888, estimate: 0, errors: <nil>
I0903 20:19:50.455795       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2888" (185.187623ms)
I0903 20:19:50.476875       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 StorageAccountType:StandardSSD_LRS Size:10
I0903 20:19:50.509530       1 azure_managedDiskController.go:384] Azure disk "capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" is not zoned
... skipping 326 lines ...
I0903 20:21:04.414858       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: claim azuredisk-1563/pvc-xvhpb not found
I0903 20:21:04.414866       1 pv_controller.go:1108] reclaimVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: policy is Delete
I0903 20:21:04.414879       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]]
I0903 20:21:04.414950       1 pv_controller.go:1764] operation "delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]" is already running, skipping
I0903 20:21:04.416517       1 pv_controller.go:1341] isVolumeReleased[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is released
I0903 20:21:04.416533       1 pv_controller.go:1405] doDeleteVolume [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]
I0903 20:21:04.466020       1 pv_controller.go:1260] deletion of volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:21:04.466048       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: set phase Failed
I0903 20:21:04.466058       1 pv_controller.go:858] updating PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: set phase Failed
I0903 20:21:04.470142       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" with version 1539
I0903 20:21:04.470677       1 pv_controller.go:879] volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" entered phase "Failed"
I0903 20:21:04.470738       1 pv_controller.go:901] volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:21:04.470188       1 pv_protection_controller.go:205] Got event on PV pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367
I0903 20:21:04.470206       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" with version 1539
I0903 20:21:04.470937       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: phase: Failed, bound to: "azuredisk-1563/pvc-xvhpb (uid: 7a2f86d7-1954-4503-b15b-e8d6cc4da367)", boundByController: true
I0903 20:21:04.471017       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is bound to claim azuredisk-1563/pvc-xvhpb
I0903 20:21:04.471087       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: claim azuredisk-1563/pvc-xvhpb not found
I0903 20:21:04.471099       1 pv_controller.go:1108] reclaimVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: policy is Delete
I0903 20:21:04.471131       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]]
E0903 20:21:04.470828       1 goroutinemap.go:150] Operation for "delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]" failed. No retries permitted until 2022-09-03 20:21:04.970765841 +0000 UTC m=+420.202973634 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:21:04.471181       1 pv_controller.go:1766] operation "delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]" postponed due to exponential backoff
I0903 20:21:04.471332       1 event.go:291] "Event occurred" object="pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:21:06.355624       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:21:10.127356       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 98 items received
I0903 20:21:13.520333       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="102.001µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:33202" resp=200
I0903 20:21:14.645870       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
... skipping 10 lines ...
I0903 20:21:18.127034       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:21:18.133183       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:21:18.150545       1 gc_controller.go:161] GC'ing orphaned
I0903 20:21:18.150577       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:21:18.158727       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:21:18.158793       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" with version 1539
I0903 20:21:18.158834       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: phase: Failed, bound to: "azuredisk-1563/pvc-xvhpb (uid: 7a2f86d7-1954-4503-b15b-e8d6cc4da367)", boundByController: true
I0903 20:21:18.158886       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is bound to claim azuredisk-1563/pvc-xvhpb
I0903 20:21:18.158915       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: claim azuredisk-1563/pvc-xvhpb not found
I0903 20:21:18.158925       1 pv_controller.go:1108] reclaimVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: policy is Delete
I0903 20:21:18.158941       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]]
I0903 20:21:18.158982       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367] started
I0903 20:21:18.167065       1 pv_controller.go:1341] isVolumeReleased[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is released
I0903 20:21:18.167086       1 pv_controller.go:1405] doDeleteVolume [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]
I0903 20:21:18.167120       1 pv_controller.go:1260] deletion of volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) since it's in attaching or detaching state
I0903 20:21:18.167138       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: set phase Failed
I0903 20:21:18.167148       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: phase Failed already set
E0903 20:21:18.167177       1 goroutinemap.go:150] Operation for "delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]" failed. No retries permitted until 2022-09-03 20:21:19.167156741 +0000 UTC m=+434.399364834 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367) since it's in attaching or detaching state"
I0903 20:21:18.362177       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:21:18.780806       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:21:18.938687       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:21:23.269819       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:21:23.519916       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="76.001µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56514" resp=200
I0903 20:21:29.148437       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
... skipping 2 lines ...
I0903 20:21:30.060473       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367 was detached from node:capz-9buiac-mp-0000001
I0903 20:21:30.060495       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367") on node "capz-9buiac-mp-0000001" 
I0903 20:21:30.798316       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0903 20:21:33.134211       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:21:33.159525       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:21:33.159611       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" with version 1539
I0903 20:21:33.159702       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: phase: Failed, bound to: "azuredisk-1563/pvc-xvhpb (uid: 7a2f86d7-1954-4503-b15b-e8d6cc4da367)", boundByController: true
I0903 20:21:33.159738       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is bound to claim azuredisk-1563/pvc-xvhpb
I0903 20:21:33.159790       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: claim azuredisk-1563/pvc-xvhpb not found
I0903 20:21:33.159802       1 pv_controller.go:1108] reclaimVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: policy is Delete
I0903 20:21:33.159823       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]]
I0903 20:21:33.159898       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367] started
I0903 20:21:33.165947       1 pv_controller.go:1341] isVolumeReleased[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is released
... skipping 5 lines ...
I0903 20:21:38.429767       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367
I0903 20:21:38.429800       1 pv_controller.go:1436] volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" deleted
I0903 20:21:38.429812       1 pv_controller.go:1284] deleteVolumeOperation [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: success
I0903 20:21:38.438506       1 pv_protection_controller.go:205] Got event on PV pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367
I0903 20:21:38.438541       1 pv_protection_controller.go:125] Processing PV pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367
I0903 20:21:38.439382       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" with version 1591
I0903 20:21:38.439425       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: phase: Failed, bound to: "azuredisk-1563/pvc-xvhpb (uid: 7a2f86d7-1954-4503-b15b-e8d6cc4da367)", boundByController: true
I0903 20:21:38.439449       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: volume is bound to claim azuredisk-1563/pvc-xvhpb
I0903 20:21:38.439468       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: claim azuredisk-1563/pvc-xvhpb not found
I0903 20:21:38.439542       1 pv_controller.go:1108] reclaimVolume[pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367]: policy is Delete
I0903 20:21:38.439607       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367[16812e6e-3f33-47a2-9e7e-9d162bf0a88f]]
I0903 20:21:38.439714       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367] started
I0903 20:21:38.445615       1 pv_controller.go:1244] Volume "pvc-7a2f86d7-1954-4503-b15b-e8d6cc4da367" is already being deleted
... skipping 116 lines ...
I0903 20:21:45.393418       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-2g4pd.17117341681965fe, uid 78c9698f-a22b-4bd9-95fd-69322c70b5dd, event type delete
I0903 20:21:45.396135       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-2g4pd.1711734245bb6eee, uid dca9285e-343c-440d-984a-c64045de4de5, event type delete
I0903 20:21:45.402691       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name pvc-xvhpb.1711733024ba6d6c, uid 9bd93424-4fe4-4d67-88a2-c604487e054b, event type delete
I0903 20:21:45.406261       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name pvc-xvhpb.17117330b54b71f4, uid 391bd26a-ab01-4238-99ed-b594f67b3826, event type delete
I0903 20:21:45.418574       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1563, name default-token-k9zz9, uid 7f2d9f28-848c-4d58-bad6-1a6f24e9d9a5, event type delete
I0903 20:21:45.431334       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1563, name kube-root-ca.crt, uid 8dffae5f-fa91-4c82-b231-f16cbfe509e0, event type delete
E0903 20:21:45.434020       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1563/default: secrets "default-token-ghlpp" is forbidden: unable to create new content in namespace azuredisk-1563 because it is being terminated
I0903 20:21:45.435297       1 publisher.go:181] Finished syncing namespace "azuredisk-1563" (3.778944ms)
I0903 20:21:45.449378       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1563/default), service account deleted, removing tokens
I0903 20:21:45.449681       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1563" (3.2µs)
I0903 20:21:45.449709       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1563, name default, uid dab16650-c112-4e7e-937f-7cde2cee2e7c, event type delete
I0903 20:21:45.544674       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1563" (2.9µs)
I0903 20:21:45.546198       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1563, estimate: 0, errors: <nil>
... skipping 183 lines ...
I0903 20:22:22.477756       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: claim azuredisk-7463/pvc-cdcph not found
I0903 20:22:22.477845       1 pv_controller.go:1108] reclaimVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: policy is Delete
I0903 20:22:22.477960       1 pv_controller.go:1753] scheduleOperation[delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]]
I0903 20:22:22.478049       1 pv_controller.go:1764] operation "delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]" is already running, skipping
I0903 20:22:22.480272       1 pv_controller.go:1341] isVolumeReleased[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is released
I0903 20:22:22.480288       1 pv_controller.go:1405] doDeleteVolume [pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]
I0903 20:22:22.527045       1 pv_controller.go:1260] deletion of volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:22:22.527069       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: set phase Failed
I0903 20:22:22.527080       1 pv_controller.go:858] updating PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: set phase Failed
I0903 20:22:22.531434       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" with version 1710
I0903 20:22:22.531470       1 pv_controller.go:879] volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" entered phase "Failed"
I0903 20:22:22.531481       1 pv_controller.go:901] volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:22:22.531732       1 pv_protection_controller.go:205] Got event on PV pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe
I0903 20:22:22.531808       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" with version 1710
I0903 20:22:22.531960       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: phase: Failed, bound to: "azuredisk-7463/pvc-cdcph (uid: 918449f7-5cad-4a6c-8e52-57d6c76a73fe)", boundByController: true
I0903 20:22:22.531992       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is bound to claim azuredisk-7463/pvc-cdcph
I0903 20:22:22.532014       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: claim azuredisk-7463/pvc-cdcph not found
I0903 20:22:22.532354       1 pv_controller.go:1108] reclaimVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: policy is Delete
I0903 20:22:22.532507       1 pv_controller.go:1753] scheduleOperation[delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]]
E0903 20:22:22.532098       1 goroutinemap.go:150] Operation for "delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]" failed. No retries permitted until 2022-09-03 20:22:23.031866715 +0000 UTC m=+498.264074608 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:22:22.532734       1 pv_controller.go:1766] operation "delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]" postponed due to exponential backoff
I0903 20:22:22.532262       1 event.go:291] "Event occurred" object="pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:22:23.520354       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43322" resp=200
I0903 20:22:23.541386       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0903 20:22:24.684493       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:22:24.685950       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe to the node "capz-9buiac-mp-0000001" mounted false
... skipping 8 lines ...
I0903 20:22:27.683004       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0903 20:22:28.370345       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:22:32.131290       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 28 items received
I0903 20:22:33.136011       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:22:33.162449       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:22:33.162544       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" with version 1710
I0903 20:22:33.162608       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: phase: Failed, bound to: "azuredisk-7463/pvc-cdcph (uid: 918449f7-5cad-4a6c-8e52-57d6c76a73fe)", boundByController: true
I0903 20:22:33.162642       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is bound to claim azuredisk-7463/pvc-cdcph
I0903 20:22:33.162664       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: claim azuredisk-7463/pvc-cdcph not found
I0903 20:22:33.162673       1 pv_controller.go:1108] reclaimVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: policy is Delete
I0903 20:22:33.162692       1 pv_controller.go:1753] scheduleOperation[delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]]
I0903 20:22:33.162742       1 pv_controller.go:1232] deleteVolumeOperation [pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe] started
I0903 20:22:33.168744       1 pv_controller.go:1341] isVolumeReleased[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is released
I0903 20:22:33.168767       1 pv_controller.go:1405] doDeleteVolume [pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]
I0903 20:22:33.168918       1 pv_controller.go:1260] deletion of volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) since it's in attaching or detaching state
I0903 20:22:33.168938       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: set phase Failed
I0903 20:22:33.168949       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: phase Failed already set
E0903 20:22:33.169028       1 goroutinemap.go:150] Operation for "delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]" failed. No retries permitted until 2022-09-03 20:22:34.169000528 +0000 UTC m=+509.401208321 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe) since it's in attaching or detaching state"
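Editor's note: the retry gating visible above ("No retries permitted until ...", with the delay growing from 500ms to 1s and later 2s) is classic per-operation exponential backoff. The following is a minimal, hypothetical Go sketch of that pattern, written for illustration only; the key name and types are invented and this is not the kube-controller-manager implementation.

package main

import (
	"fmt"
	"time"
)

// backoffEntry tracks the next permitted attempt and the current delay for one
// operation key such as "delete-pvc-918449f7-...".
type backoffEntry struct {
	nextAttempt time.Time
	delay       time.Duration
}

type backoffTracker struct {
	initial time.Duration
	max     time.Duration
	entries map[string]*backoffEntry
}

func newBackoffTracker(initial, max time.Duration) *backoffTracker {
	return &backoffTracker{initial: initial, max: max, entries: map[string]*backoffEntry{}}
}

// allowed reports whether the operation may run now.
func (b *backoffTracker) allowed(key string, now time.Time) bool {
	e, ok := b.entries[key]
	return !ok || !now.Before(e.nextAttempt)
}

// recordFailure doubles the delay (capped at max) and postpones the next attempt,
// mirroring the 500ms -> 1s -> 2s progression seen in the log.
func (b *backoffTracker) recordFailure(key string, now time.Time) {
	e, ok := b.entries[key]
	if !ok {
		e = &backoffEntry{delay: b.initial}
		b.entries[key] = e
	} else {
		e.delay *= 2
		if e.delay > b.max {
			e.delay = b.max
		}
	}
	e.nextAttempt = now.Add(e.delay)
}

func main() {
	bt := newBackoffTracker(500*time.Millisecond, 16*time.Second)
	now := time.Now()
	key := "delete-pvc-example" // hypothetical key
	for i := 0; i < 4; i++ {
		bt.recordFailure(key, now)
		e := bt.entries[key]
		fmt.Printf("attempt %d failed, next retry not before %s (delay %s)\n",
			i+1, e.nextAttempt.Format(time.RFC3339), e.delay)
		now = e.nextAttempt
	}
	_ = bt.allowed(key, now)
}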
I0903 20:22:33.520078       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="68.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44982" resp=200
I0903 20:22:37.142755       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 16 items received
I0903 20:22:38.152373       1 gc_controller.go:161] GC'ing orphaned
I0903 20:22:38.152435       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:22:38.221677       1 controller.go:272] Triggering nodeSync
I0903 20:22:38.221709       1 controller.go:291] nodeSync has been triggered
... skipping 9 lines ...
I0903 20:22:43.519438       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="76.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41782" resp=200
I0903 20:22:45.121477       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 16 items received
I0903 20:22:48.128162       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:22:48.136324       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:22:48.162606       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:22:48.162665       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" with version 1710
I0903 20:22:48.162696       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: phase: Failed, bound to: "azuredisk-7463/pvc-cdcph (uid: 918449f7-5cad-4a6c-8e52-57d6c76a73fe)", boundByController: true
I0903 20:22:48.162736       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is bound to claim azuredisk-7463/pvc-cdcph
I0903 20:22:48.162751       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: claim azuredisk-7463/pvc-cdcph not found
I0903 20:22:48.162759       1 pv_controller.go:1108] reclaimVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: policy is Delete
I0903 20:22:48.162772       1 pv_controller.go:1753] scheduleOperation[delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]]
I0903 20:22:48.162795       1 pv_controller.go:1232] deleteVolumeOperation [pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe] started
I0903 20:22:48.171512       1 pv_controller.go:1341] isVolumeReleased[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is released
... skipping 4 lines ...
I0903 20:22:53.389770       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe
I0903 20:22:53.389809       1 pv_controller.go:1436] volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" deleted
I0903 20:22:53.389824       1 pv_controller.go:1284] deleteVolumeOperation [pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: success
I0903 20:22:53.397134       1 pv_protection_controller.go:205] Got event on PV pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe
I0903 20:22:53.397357       1 pv_protection_controller.go:125] Processing PV pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe
I0903 20:22:53.397469       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" with version 1758
I0903 20:22:53.397604       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: phase: Failed, bound to: "azuredisk-7463/pvc-cdcph (uid: 918449f7-5cad-4a6c-8e52-57d6c76a73fe)", boundByController: true
I0903 20:22:53.397718       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: volume is bound to claim azuredisk-7463/pvc-cdcph
I0903 20:22:53.397830       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: claim azuredisk-7463/pvc-cdcph not found
I0903 20:22:53.397922       1 pv_controller.go:1108] reclaimVolume[pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe]: policy is Delete
I0903 20:22:53.398019       1 pv_controller.go:1753] scheduleOperation[delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]]
I0903 20:22:53.398104       1 pv_controller.go:1764] operation "delete-pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe[c6798a18-067c-455a-a335-833c99579c51]" is already running, skipping
I0903 20:22:53.401771       1 pv_controller_base.go:235] volume "pvc-918449f7-5cad-4a6c-8e52-57d6c76a73fe" deleted
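Editor's note: the pairing of "operation ... is already running, skipping" with "postponed due to exponential backoff" above reflects a per-key operation map that runs at most one goroutine per operation name. Below is a deliberately simplified sketch of that idea; it is not the goroutinemap package itself, and the names are hypothetical.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type operationMap struct {
	mu      sync.Mutex
	running map[string]bool
}

func newOperationMap() *operationMap {
	return &operationMap{running: map[string]bool{}}
}

// run starts fn under the given key unless an operation with the same key is
// already in flight, in which case the new request is skipped.
func (m *operationMap) run(key string, fn func() error) error {
	m.mu.Lock()
	if m.running[key] {
		m.mu.Unlock()
		return errors.New("operation is already running, skipping")
	}
	m.running[key] = true
	m.mu.Unlock()

	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, key)
			m.mu.Unlock()
		}()
		if err := fn(); err != nil {
			fmt.Printf("operation %q failed: %v\n", key, err)
		}
	}()
	return nil
}

func main() {
	m := newOperationMap()
	key := "delete-pvc-example" // hypothetical key

	// The first call starts a slow delete; the second is rejected while it runs.
	_ = m.run(key, func() error { time.Sleep(100 * time.Millisecond); return nil })
	if err := m.run(key, func() error { return nil }); err != nil {
		fmt.Println(err)
	}
	time.Sleep(200 * time.Millisecond) // let the first operation finish
}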
... skipping 133 lines ...
I0903 20:23:03.488836       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711" lun 0 to node "capz-9buiac-mp-0000000".
I0903 20:23:03.488883       1 azure_controller_vmss.go:101] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - attach disk(capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) with DiskEncryptionSetID()
I0903 20:23:03.523832       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48638" resp=200
I0903 20:23:03.741905       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7463
I0903 20:23:03.784064       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7463, name default-token-ljvr9, uid ef6fad86-1d0d-4af2-8cfc-5a9877e9470e, event type delete
I0903 20:23:03.798453       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.1711734b54343c39, uid 3891d162-6bd4-4994-814d-1a2768bc89e1, event type delete
E0903 20:23:03.800399       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7463/default: secrets "default-token-q2q9v" is forbidden: unable to create new content in namespace azuredisk-7463 because it is being terminated
I0903 20:23:03.801785       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.171173514129fb73, uid 854ce25b-86ab-4604-bfe8-fe2d70f289f6, event type delete
I0903 20:23:03.822532       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.171173532a2e17a3, uid 6b6dffef-eb39-43f2-9484-5aad3e10a324, event type delete
I0903 20:23:03.825433       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.171173532a2ea2b4, uid 22927b46-ab8b-4af5-8b61-303db2ee0e6a, event type delete
I0903 20:23:03.831424       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.171173534e51537a, uid 07f5658e-d4b3-466e-b724-bd4c46ff20d6, event type delete
I0903 20:23:03.833830       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.17117353506d32c9, uid 596f82d5-f40f-4f44-aecb-9b5001e0ac04, event type delete
I0903 20:23:03.836713       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-7lcdb.1711735356840591, uid 37c40af8-c904-417f-b3c2-90e0a3894789, event type delete
... skipping 135 lines ...
I0903 20:23:26.354371       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:23:26.354379       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:23:26.354384       1 pv_controller.go:1764] operation "delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]" is already running, skipping
I0903 20:23:26.354406       1 pv_controller.go:1232] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711] started
I0903 20:23:26.355975       1 pv_controller.go:1341] isVolumeReleased[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is released
I0903 20:23:26.355992       1 pv_controller.go:1405] doDeleteVolume [pvc-03110910-388a-491c-9ab2-bc0e4e193711]
I0903 20:23:26.378839       1 pv_controller.go:1260] deletion of volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:23:26.378864       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: set phase Failed
I0903 20:23:26.378873       1 pv_controller.go:858] updating PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: set phase Failed
I0903 20:23:26.383993       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1862
I0903 20:23:26.384026       1 pv_controller.go:879] volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" entered phase "Failed"
I0903 20:23:26.384037       1 pv_controller.go:901] volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
E0903 20:23:26.384087       1 goroutinemap.go:150] Operation for "delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]" failed. No retries permitted until 2022-09-03 20:23:26.884059507 +0000 UTC m=+562.116267400 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:23:26.384315       1 event.go:291] "Event occurred" object="pvc-03110910-388a-491c-9ab2-bc0e4e193711" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:23:26.384443       1 pv_protection_controller.go:205] Got event on PV pvc-03110910-388a-491c-9ab2-bc0e4e193711
I0903 20:23:26.384469       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1862
I0903 20:23:26.384491       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase: Failed, bound to: "azuredisk-9241/pvc-h5tv9 (uid: 03110910-388a-491c-9ab2-bc0e4e193711)", boundByController: true
I0903 20:23:26.384514       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is bound to claim azuredisk-9241/pvc-h5tv9
I0903 20:23:26.384532       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: claim azuredisk-9241/pvc-h5tv9 not found
I0903 20:23:26.384540       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:23:26.384554       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:23:26.384561       1 pv_controller.go:1766] operation "delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]" postponed due to exponential backoff
I0903 20:23:27.242159       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0903 20:23:28.380261       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:23:30.133303       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 7 items received
I0903 20:23:30.151640       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0903 20:23:33.137883       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:23:33.147320       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 17 items received
I0903 20:23:33.164530       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:23:33.164621       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1862
I0903 20:23:33.164655       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase: Failed, bound to: "azuredisk-9241/pvc-h5tv9 (uid: 03110910-388a-491c-9ab2-bc0e4e193711)", boundByController: true
I0903 20:23:33.164730       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is bound to claim azuredisk-9241/pvc-h5tv9
I0903 20:23:33.164755       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: claim azuredisk-9241/pvc-h5tv9 not found
I0903 20:23:33.164768       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:23:33.164803       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:23:33.164841       1 pv_controller.go:1232] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711] started
I0903 20:23:33.168906       1 pv_controller.go:1341] isVolumeReleased[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is released
I0903 20:23:33.168924       1 pv_controller.go:1405] doDeleteVolume [pvc-03110910-388a-491c-9ab2-bc0e4e193711]
I0903 20:23:33.192852       1 pv_controller.go:1260] deletion of volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:23:33.192875       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: set phase Failed
I0903 20:23:33.192886       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase Failed already set
E0903 20:23:33.192942       1 goroutinemap.go:150] Operation for "delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]" failed. No retries permitted until 2022-09-03 20:23:34.192894511 +0000 UTC m=+569.425102304 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:23:33.520586       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="65.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36032" resp=200
I0903 20:23:34.737268       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:23:34.737303       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711 to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:23:34.753059       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-9buiac-mp-0000000" succeeded. VolumesAttached: []
I0903 20:23:34.753697       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711") on node "capz-9buiac-mp-0000000" 
I0903 20:23:34.754725       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
... skipping 9 lines ...
I0903 20:23:43.520307       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="71.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50702" resp=200
I0903 20:23:46.126390       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0903 20:23:48.129742       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:23:48.137964       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:23:48.165399       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:23:48.165582       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1862
I0903 20:23:48.165653       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase: Failed, bound to: "azuredisk-9241/pvc-h5tv9 (uid: 03110910-388a-491c-9ab2-bc0e4e193711)", boundByController: true
I0903 20:23:48.165695       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is bound to claim azuredisk-9241/pvc-h5tv9
I0903 20:23:48.165738       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: claim azuredisk-9241/pvc-h5tv9 not found
I0903 20:23:48.165757       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:23:48.165774       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:23:48.165823       1 pv_controller.go:1232] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711] started
I0903 20:23:48.173996       1 pv_controller.go:1341] isVolumeReleased[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is released
I0903 20:23:48.174013       1 pv_controller.go:1405] doDeleteVolume [pvc-03110910-388a-491c-9ab2-bc0e4e193711]
I0903 20:23:48.174049       1 pv_controller.go:1260] deletion of volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) since it's in attaching or detaching state
I0903 20:23:48.174128       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: set phase Failed
I0903 20:23:48.174163       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase Failed already set
E0903 20:23:48.174236       1 goroutinemap.go:150] Operation for "delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]" failed. No retries permitted until 2022-09-03 20:23:50.174208045 +0000 UTC m=+585.406415838 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) since it's in attaching or detaching state"
I0903 20:23:48.885546       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:23:49.121051       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0903 20:23:53.520504       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:60828" resp=200
I0903 20:23:55.047957       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) returned with <nil>
I0903 20:23:55.048010       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711) succeeded
I0903 20:23:55.048022       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711 was detached from node:capz-9buiac-mp-0000000
... skipping 3 lines ...
I0903 20:23:58.155943       1 gc_controller.go:161] GC'ing orphaned
I0903 20:23:58.155981       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:24:03.138536       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:24:03.138955       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0903 20:24:03.166469       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:24:03.166551       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1862
I0903 20:24:03.166613       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase: Failed, bound to: "azuredisk-9241/pvc-h5tv9 (uid: 03110910-388a-491c-9ab2-bc0e4e193711)", boundByController: true
I0903 20:24:03.166655       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is bound to claim azuredisk-9241/pvc-h5tv9
I0903 20:24:03.166678       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: claim azuredisk-9241/pvc-h5tv9 not found
I0903 20:24:03.166690       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:24:03.166707       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:24:03.166754       1 pv_controller.go:1232] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711] started
I0903 20:24:03.173002       1 pv_controller.go:1341] isVolumeReleased[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is released
... skipping 4 lines ...
I0903 20:24:08.376530       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-03110910-388a-491c-9ab2-bc0e4e193711
I0903 20:24:08.376565       1 pv_controller.go:1436] volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" deleted
I0903 20:24:08.376581       1 pv_controller.go:1284] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711]: success
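Editor's note: the cycle above shows that deleting the managed disk keeps failing while the disk is attached or "in attaching or detaching state", and succeeds only after the detach completes. The sketch below illustrates that retry-until-detached ordering under assumed, invented types (diskClient and diskState are stand-ins, not the Azure cloud-provider API).

package main

import (
	"errors"
	"fmt"
	"time"
)

// diskState is a simplified view of what the log reports for a managed disk.
type diskState int

const (
	diskAttached diskState = iota
	diskDetaching
	diskUnattached
)

// diskClient is a stand-in for the cloud client; real code would query Azure.
type diskClient struct{ state diskState }

func (c *diskClient) delete(diskURI string) error {
	switch c.state {
	case diskAttached:
		return fmt.Errorf("disk(%s) already attached to a node, could not be deleted", diskURI)
	case diskDetaching:
		return errors.New("disk is in attaching or detaching state")
	default:
		return nil
	}
}

// deleteWithRetry retries deletion until the disk is deletable or attempts run out.
func deleteWithRetry(c *diskClient, diskURI string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = c.delete(diskURI); err == nil {
			return nil
		}
		fmt.Printf("attempt %d: %v\n", i+1, err)
		time.Sleep(delay)
		// Simulate the attach/detach controller making progress in the background.
		if c.state != diskUnattached {
			c.state++
		}
	}
	return err
}

func main() {
	c := &diskClient{state: diskAttached}
	if err := deleteWithRetry(c, "pvc-example", 5, 10*time.Millisecond); err != nil {
		fmt.Println("delete failed:", err)
	} else {
		fmt.Println("disk deleted")
	}
}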
I0903 20:24:08.385561       1 pv_protection_controller.go:205] Got event on PV pvc-03110910-388a-491c-9ab2-bc0e4e193711
I0903 20:24:08.385588       1 pv_protection_controller.go:125] Processing PV pvc-03110910-388a-491c-9ab2-bc0e4e193711
I0903 20:24:08.385841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" with version 1925
I0903 20:24:08.385862       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: phase: Failed, bound to: "azuredisk-9241/pvc-h5tv9 (uid: 03110910-388a-491c-9ab2-bc0e4e193711)", boundByController: true
I0903 20:24:08.385888       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: volume is bound to claim azuredisk-9241/pvc-h5tv9
I0903 20:24:08.385901       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: claim azuredisk-9241/pvc-h5tv9 not found
I0903 20:24:08.385912       1 pv_controller.go:1108] reclaimVolume[pvc-03110910-388a-491c-9ab2-bc0e4e193711]: policy is Delete
I0903 20:24:08.385922       1 pv_controller.go:1753] scheduleOperation[delete-pvc-03110910-388a-491c-9ab2-bc0e4e193711[d6b1e06d-604e-442f-ba20-4f822610f88b]]
I0903 20:24:08.385938       1 pv_controller.go:1232] deleteVolumeOperation [pvc-03110910-388a-491c-9ab2-bc0e4e193711] started
I0903 20:24:08.391597       1 pv_controller.go:1244] Volume "pvc-03110910-388a-491c-9ab2-bc0e4e193711" is already being deleted
... skipping 814 lines ...
I0903 20:25:46.574131       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:25:46.574178       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:25:46.574238       1 pv_controller.go:1764] operation "delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]" is already running, skipping
I0903 20:25:46.574352       1 pv_controller.go:1232] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c] started
I0903 20:25:46.576136       1 pv_controller.go:1341] isVolumeReleased[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is released
I0903 20:25:46.576223       1 pv_controller.go:1405] doDeleteVolume [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]
I0903 20:25:46.603883       1 pv_controller.go:1260] deletion of volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:25:46.603933       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: set phase Failed
I0903 20:25:46.603955       1 pv_controller.go:858] updating PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: set phase Failed
I0903 20:25:46.608564       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2171
I0903 20:25:46.608596       1 pv_controller.go:879] volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" entered phase "Failed"
I0903 20:25:46.608605       1 pv_controller.go:901] volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
E0903 20:25:46.608656       1 goroutinemap.go:150] Operation for "delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]" failed. No retries permitted until 2022-09-03 20:25:47.108629783 +0000 UTC m=+702.340837676 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:25:46.609005       1 event.go:291] "Event occurred" object="pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:25:46.610385       1 pv_protection_controller.go:205] Got event on PV pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c
I0903 20:25:46.610419       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2171
I0903 20:25:46.610539       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase: Failed, bound to: "azuredisk-9336/pvc-djnjs (uid: 6f8af82c-e2bf-467a-bfcb-b9376b952d1c)", boundByController: true
I0903 20:25:46.610654       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is bound to claim azuredisk-9336/pvc-djnjs
I0903 20:25:46.610760       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: claim azuredisk-9336/pvc-djnjs not found
I0903 20:25:46.610853       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:25:46.610873       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:25:46.610882       1 pv_controller.go:1766] operation "delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]" postponed due to exponential backoff
I0903 20:25:48.132431       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 11 lines ...
I0903 20:25:48.173070       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:25:48.173125       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 found: phase: Bound, bound to: "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2", bindCompleted: true, boundByController: true
I0903 20:25:48.173139       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: all is bound
I0903 20:25:48.173147       1 pv_controller.go:858] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Bound
I0903 20:25:48.173158       1 pv_controller.go:861] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase Bound already set
I0903 20:25:48.173171       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2171
I0903 20:25:48.173195       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase: Failed, bound to: "azuredisk-9336/pvc-djnjs (uid: 6f8af82c-e2bf-467a-bfcb-b9376b952d1c)", boundByController: true
I0903 20:25:48.173218       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is bound to claim azuredisk-9336/pvc-djnjs
I0903 20:25:48.173239       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: claim azuredisk-9336/pvc-djnjs not found
I0903 20:25:48.173247       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:25:48.173262       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:25:48.173290       1 pv_controller.go:1232] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c] started
I0903 20:25:48.173500       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-rlm5r" with version 1956
... skipping 27 lines ...
I0903 20:25:48.177716       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-xczk8] status: phase Bound already set
I0903 20:25:48.177732       1 pv_controller.go:1038] volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" bound to claim "azuredisk-9336/pvc-xczk8"
I0903 20:25:48.177758       1 pv_controller.go:1039] volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-xczk8 (uid: c9f74a87-b028-4e36-ad53-03e73a8d4cc2)", boundByController: true
I0903 20:25:48.177774       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-xczk8" status after binding: phase: Bound, bound to: "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2", bindCompleted: true, boundByController: true
I0903 20:25:48.180129       1 pv_controller.go:1341] isVolumeReleased[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is released
I0903 20:25:48.180146       1 pv_controller.go:1405] doDeleteVolume [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]
I0903 20:25:48.212138       1 pv_controller.go:1260] deletion of volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:25:48.212159       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: set phase Failed
I0903 20:25:48.212168       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase Failed already set
E0903 20:25:48.212210       1 goroutinemap.go:150] Operation for "delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]" failed. No retries permitted until 2022-09-03 20:25:49.21217766 +0000 UTC m=+704.444385453 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:25:48.963282       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:25:49.130580       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0903 20:25:53.520468       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="116.402µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53678" resp=200
I0903 20:25:54.837151       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:25:54.837189       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:25:54.837198       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a to the node "capz-9buiac-mp-0000001" mounted true
... skipping 61 lines ...
I0903 20:26:03.173976       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:26:03.173996       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 found: phase: Bound, bound to: "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2", bindCompleted: true, boundByController: true
I0903 20:26:03.174096       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: all is bound
I0903 20:26:03.174107       1 pv_controller.go:858] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Bound
I0903 20:26:03.174116       1 pv_controller.go:861] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase Bound already set
I0903 20:26:03.174129       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2171
I0903 20:26:03.174167       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase: Failed, bound to: "azuredisk-9336/pvc-djnjs (uid: 6f8af82c-e2bf-467a-bfcb-b9376b952d1c)", boundByController: true
I0903 20:26:03.174189       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is bound to claim azuredisk-9336/pvc-djnjs
I0903 20:26:03.174226       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: claim azuredisk-9336/pvc-djnjs not found
I0903 20:26:03.174236       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:26:03.174270       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:26:03.174343       1 pv_controller.go:1232] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c] started
I0903 20:26:03.178535       1 pv_controller.go:1341] isVolumeReleased[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is released
I0903 20:26:03.178555       1 pv_controller.go:1405] doDeleteVolume [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]
I0903 20:26:03.178611       1 pv_controller.go:1260] deletion of volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) since it's in attaching or detaching state
I0903 20:26:03.178638       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: set phase Failed
I0903 20:26:03.178646       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase Failed already set
E0903 20:26:03.178724       1 goroutinemap.go:150] Operation for "delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]" failed. No retries permitted until 2022-09-03 20:26:05.178683872 +0000 UTC m=+720.410891765 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) since it's in attaching or detaching state"
I0903 20:26:03.527601       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="137.202µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:47914" resp=200
I0903 20:26:10.176417       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) returned with <nil>
I0903 20:26:10.176471       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c) succeeded
I0903 20:26:10.176492       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c was detached from node:capz-9buiac-mp-0000001
I0903 20:26:10.176514       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c") on node "capz-9buiac-mp-0000001" 
I0903 20:26:11.937213       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 16 items received
... skipping 17 lines ...
I0903 20:26:18.173025       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:26:18.173040       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 found: phase: Bound, bound to: "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2", bindCompleted: true, boundByController: true
I0903 20:26:18.173071       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: all is bound
I0903 20:26:18.173083       1 pv_controller.go:858] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Bound
I0903 20:26:18.173092       1 pv_controller.go:861] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase Bound already set
I0903 20:26:18.173104       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2171
I0903 20:26:18.173121       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase: Failed, bound to: "azuredisk-9336/pvc-djnjs (uid: 6f8af82c-e2bf-467a-bfcb-b9376b952d1c)", boundByController: true
I0903 20:26:18.173160       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is bound to claim azuredisk-9336/pvc-djnjs
I0903 20:26:18.173178       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: claim azuredisk-9336/pvc-djnjs not found
I0903 20:26:18.173186       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:26:18.173204       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:26:18.173253       1 pv_controller.go:1232] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c] started
I0903 20:26:18.173352       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-rlm5r" with version 1956
... skipping 34 lines ...
I0903 20:26:23.368014       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c
I0903 20:26:23.368047       1 pv_controller.go:1436] volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" deleted
I0903 20:26:23.368061       1 pv_controller.go:1284] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: success
I0903 20:26:23.374381       1 pv_protection_controller.go:205] Got event on PV pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c
I0903 20:26:23.374411       1 pv_protection_controller.go:125] Processing PV pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c
I0903 20:26:23.374880       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" with version 2225
I0903 20:26:23.374916       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: phase: Failed, bound to: "azuredisk-9336/pvc-djnjs (uid: 6f8af82c-e2bf-467a-bfcb-b9376b952d1c)", boundByController: true
I0903 20:26:23.374970       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: volume is bound to claim azuredisk-9336/pvc-djnjs
I0903 20:26:23.375129       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: claim azuredisk-9336/pvc-djnjs not found
I0903 20:26:23.375209       1 pv_controller.go:1108] reclaimVolume[pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c]: policy is Delete
I0903 20:26:23.375233       1 pv_controller.go:1753] scheduleOperation[delete-pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c[fb0405b1-a82e-4454-8e08-44deb32e9b06]]
I0903 20:26:23.375299       1 pv_controller.go:1232] deleteVolumeOperation [pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c] started
I0903 20:26:23.379380       1 pv_controller.go:1244] Volume "pvc-6f8af82c-e2bf-467a-bfcb-b9376b952d1c" is already being deleted
... skipping 240 lines ...
I0903 20:27:04.348725       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2] started
I0903 20:27:04.349078       1 pv_controller.go:1108] reclaimVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: policy is Delete
I0903 20:27:04.349238       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]]
I0903 20:27:04.349317       1 pv_controller.go:1764] operation "delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]" is already running, skipping
I0903 20:27:04.352722       1 pv_controller.go:1341] isVolumeReleased[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is released
I0903 20:27:04.352738       1 pv_controller.go:1405] doDeleteVolume [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]
I0903 20:27:04.402320       1 pv_controller.go:1260] deletion of volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:27:04.402347       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Failed
I0903 20:27:04.402357       1 pv_controller.go:858] updating PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Failed
I0903 20:27:04.406801       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" with version 2298
I0903 20:27:04.406836       1 pv_controller.go:879] volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" entered phase "Failed"
I0903 20:27:04.406847       1 pv_controller.go:901] volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
E0903 20:27:04.406898       1 goroutinemap.go:150] Operation for "delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]" failed. No retries permitted until 2022-09-03 20:27:04.906870653 +0000 UTC m=+780.139078546 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:27:04.407155       1 event.go:291] "Event occurred" object="pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:27:04.407186       1 pv_protection_controller.go:205] Got event on PV pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2
I0903 20:27:04.407215       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" with version 2298
I0903 20:27:04.407239       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase: Failed, bound to: "azuredisk-9336/pvc-xczk8 (uid: c9f74a87-b028-4e36-ad53-03e73a8d4cc2)", boundByController: true
I0903 20:27:04.407268       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:27:04.407287       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 not found
I0903 20:27:04.407297       1 pv_controller.go:1108] reclaimVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: policy is Delete
I0903 20:27:04.407312       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]]
I0903 20:27:04.407321       1 pv_controller.go:1766] operation "delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]" postponed due to exponential backoff
I0903 20:27:04.857761       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
... skipping 21 lines ...
I0903 20:27:18.175365       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:27:18.175381       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r found: phase: Bound, bound to: "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a", bindCompleted: true, boundByController: true
I0903 20:27:18.175394       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: all is bound
I0903 20:27:18.175403       1 pv_controller.go:858] updating PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: set phase Bound
I0903 20:27:18.175411       1 pv_controller.go:861] updating PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase Bound already set
I0903 20:27:18.175421       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" with version 2298
I0903 20:27:18.175437       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase: Failed, bound to: "azuredisk-9336/pvc-xczk8 (uid: c9f74a87-b028-4e36-ad53-03e73a8d4cc2)", boundByController: true
I0903 20:27:18.175475       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:27:18.175493       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 not found
I0903 20:27:18.175500       1 pv_controller.go:1108] reclaimVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: policy is Delete
I0903 20:27:18.175515       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]]
I0903 20:27:18.175540       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2] started
I0903 20:27:18.175779       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-rlm5r" with version 1956
... skipping 11 lines ...
I0903 20:27:18.175937       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-rlm5r] status: phase Bound already set
I0903 20:27:18.175949       1 pv_controller.go:1038] volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" bound to claim "azuredisk-9336/pvc-rlm5r"
I0903 20:27:18.175964       1 pv_controller.go:1039] volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-rlm5r (uid: 4e82e9a3-4cec-475d-ba56-acc33dd6431a)", boundByController: true
I0903 20:27:18.175978       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-rlm5r" status after binding: phase: Bound, bound to: "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a", bindCompleted: true, boundByController: true
I0903 20:27:18.194844       1 pv_controller.go:1341] isVolumeReleased[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is released
I0903 20:27:18.194862       1 pv_controller.go:1405] doDeleteVolume [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]
I0903 20:27:18.194898       1 pv_controller.go:1260] deletion of volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) since it's in attaching or detaching state
I0903 20:27:18.194915       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: set phase Failed
I0903 20:27:18.194960       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase Failed already set
E0903 20:27:18.194997       1 goroutinemap.go:150] Operation for "delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]" failed. No retries permitted until 2022-09-03 20:27:19.194971706 +0000 UTC m=+794.427179599 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) since it's in attaching or detaching state"
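The goroutinemap error above shows the PV controller refusing an immediate retry after a failed disk delete and postponing the next attempt with an exponential back-off (the log shows waits of 500ms and then 1s, and later entries say "postponed due to exponential backoff"). The following is a minimal, hypothetical Go sketch of that retry pattern, not the kube-controller-manager code itself; the `deleteDisk` helper and the delay constants are assumptions for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// deleteDisk is a stand-in for the real Azure disk delete call; here it
// simply fails a few times to demonstrate the back-off behaviour.
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errors.New("disk is in attaching or detaching state")
	}
	return nil
}

func main() {
	const (
		initialDelay = 500 * time.Millisecond // first durationBeforeRetry seen in the log
		maxDelay     = 2 * time.Minute        // cap so the delay cannot grow without bound
	)

	delay := initialDelay
	for attempt := 0; ; attempt++ {
		err := deleteDisk(attempt)
		if err == nil {
			fmt.Println("delete succeeded")
			return
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", attempt, err, delay)
		time.Sleep(delay)

		// Exponential back-off: double the wait after every failure, up to the cap.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

In the log this is why the delete eventually succeeds once the disk leaves the attaching/detaching state: the operation is simply rescheduled with a longer wait each time rather than being abandoned.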
I0903 20:27:19.012038       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:27:20.166008       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) returned with <nil>
I0903 20:27:20.166071       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2) succeeded
I0903 20:27:20.166083       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2 was detached from node:capz-9buiac-mp-0000000
I0903 20:27:20.166105       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2") on node "capz-9buiac-mp-0000000" 
I0903 20:27:23.519867       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="70.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53822" resp=200
... skipping 7 lines ...
I0903 20:27:33.175825       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:27:33.175845       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r found: phase: Bound, bound to: "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a", bindCompleted: true, boundByController: true
I0903 20:27:33.175882       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: all is bound
I0903 20:27:33.175898       1 pv_controller.go:858] updating PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: set phase Bound
I0903 20:27:33.175909       1 pv_controller.go:861] updating PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase Bound already set
I0903 20:27:33.175925       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" with version 2298
I0903 20:27:33.175977       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase: Failed, bound to: "azuredisk-9336/pvc-xczk8 (uid: c9f74a87-b028-4e36-ad53-03e73a8d4cc2)", boundByController: true
I0903 20:27:33.176000       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:27:33.176019       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 not found
I0903 20:27:33.176028       1 pv_controller.go:1108] reclaimVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: policy is Delete
I0903 20:27:33.176068       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]]
I0903 20:27:33.176098       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2] started
I0903 20:27:33.175674       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-rlm5r" with version 1956
... skipping 25 lines ...
I0903 20:27:38.358160       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2
I0903 20:27:38.358193       1 pv_controller.go:1436] volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" deleted
I0903 20:27:38.358206       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: success
I0903 20:27:38.368464       1 pv_protection_controller.go:205] Got event on PV pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2
I0903 20:27:38.368500       1 pv_protection_controller.go:125] Processing PV pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2
I0903 20:27:38.368693       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" with version 2349
I0903 20:27:38.368773       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: phase: Failed, bound to: "azuredisk-9336/pvc-xczk8 (uid: c9f74a87-b028-4e36-ad53-03e73a8d4cc2)", boundByController: true
I0903 20:27:38.368805       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: volume is bound to claim azuredisk-9336/pvc-xczk8
I0903 20:27:38.368824       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: claim azuredisk-9336/pvc-xczk8 not found
I0903 20:27:38.368832       1 pv_controller.go:1108] reclaimVolume[pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2]: policy is Delete
I0903 20:27:38.368848       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2[e3881db3-fdef-401a-b40f-4354ba6b35af]]
I0903 20:27:38.368876       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2] started
I0903 20:27:38.373619       1 pv_controller.go:1244] Volume "pvc-c9f74a87-b028-4e36-ad53-03e73a8d4cc2" is already being deleted
... skipping 179 lines ...
I0903 20:28:24.419357       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r not found
I0903 20:28:24.419367       1 pv_controller.go:1108] reclaimVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: policy is Delete
I0903 20:28:24.419380       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]]
I0903 20:28:24.419387       1 pv_controller.go:1764] operation "delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]" is already running, skipping
I0903 20:28:24.423736       1 pv_controller.go:1341] isVolumeReleased[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is released
I0903 20:28:24.423751       1 pv_controller.go:1405] doDeleteVolume [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]
I0903 20:28:24.465861       1 pv_controller.go:1260] deletion of volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:28:24.465887       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: set phase Failed
I0903 20:28:24.465898       1 pv_controller.go:858] updating PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: set phase Failed
I0903 20:28:24.469701       1 pv_protection_controller.go:205] Got event on PV pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a
I0903 20:28:24.469945       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" with version 2429
I0903 20:28:24.470139       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase: Failed, bound to: "azuredisk-9336/pvc-rlm5r (uid: 4e82e9a3-4cec-475d-ba56-acc33dd6431a)", boundByController: true
I0903 20:28:24.470397       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:28:24.470520       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r not found
I0903 20:28:24.470712       1 pv_controller.go:1108] reclaimVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: policy is Delete
I0903 20:28:24.470817       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]]
I0903 20:28:24.470960       1 pv_controller.go:1764] operation "delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]" is already running, skipping
I0903 20:28:24.471389       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" with version 2429
I0903 20:28:24.471422       1 pv_controller.go:879] volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" entered phase "Failed"
I0903 20:28:24.471550       1 pv_controller.go:901] volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:28:24.471929       1 event.go:291] "Event occurred" object="pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
E0903 20:28:24.472116       1 goroutinemap.go:150] Operation for "delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]" failed. No retries permitted until 2022-09-03 20:28:24.971607994 +0000 UTC m=+860.203815787 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:28:24.944405       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:28:24.944442       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:28:25.031230       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:28:25.031272       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:28:25.033347       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-9buiac-mp-0000001" succeeded. VolumesAttached: []
I0903 20:28:25.033652       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a") on node "capz-9buiac-mp-0000001" 
... skipping 4 lines ...
I0903 20:28:27.548812       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-letzef" (13.101µs)
I0903 20:28:28.427340       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:28:31.946163       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:28:33.148470       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:28:33.178367       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:28:33.178467       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" with version 2429
I0903 20:28:33.178582       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase: Failed, bound to: "azuredisk-9336/pvc-rlm5r (uid: 4e82e9a3-4cec-475d-ba56-acc33dd6431a)", boundByController: true
I0903 20:28:33.178688       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:28:33.178773       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r not found
I0903 20:28:33.178820       1 pv_controller.go:1108] reclaimVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: policy is Delete
I0903 20:28:33.178841       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]]
I0903 20:28:33.178873       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a] started
I0903 20:28:33.191805       1 pv_controller.go:1341] isVolumeReleased[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is released
I0903 20:28:33.191826       1 pv_controller.go:1405] doDeleteVolume [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]
I0903 20:28:33.191864       1 pv_controller.go:1260] deletion of volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) since it's in attaching or detaching state
I0903 20:28:33.191877       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: set phase Failed
I0903 20:28:33.191887       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase Failed already set
E0903 20:28:33.191920       1 goroutinemap.go:150] Operation for "delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]" failed. No retries permitted until 2022-09-03 20:28:34.191896695 +0000 UTC m=+869.424104588 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) since it's in attaching or detaching state"
I0903 20:28:33.519756       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="78.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:49856" resp=200
I0903 20:28:38.165572       1 gc_controller.go:161] GC'ing orphaned
I0903 20:28:38.165607       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:28:40.318422       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) returned with <nil>
I0903 20:28:40.318477       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a) succeeded
I0903 20:28:40.318488       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a was detached from node:capz-9buiac-mp-0000001
... skipping 2 lines ...
I0903 20:28:42.631590       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 23 items received
I0903 20:28:43.519808       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44054" resp=200
I0903 20:28:48.136890       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:28:48.149096       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:28:48.178919       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:28:48.179078       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" with version 2429
I0903 20:28:48.179205       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase: Failed, bound to: "azuredisk-9336/pvc-rlm5r (uid: 4e82e9a3-4cec-475d-ba56-acc33dd6431a)", boundByController: true
I0903 20:28:48.179250       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:28:48.179275       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r not found
I0903 20:28:48.179283       1 pv_controller.go:1108] reclaimVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: policy is Delete
I0903 20:28:48.179301       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]]
I0903 20:28:48.179343       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a] started
I0903 20:28:48.186557       1 pv_controller.go:1341] isVolumeReleased[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is released
... skipping 3 lines ...
I0903 20:28:53.453421       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a
I0903 20:28:53.453463       1 pv_controller.go:1436] volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" deleted
I0903 20:28:53.453477       1 pv_controller.go:1284] deleteVolumeOperation [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: success
I0903 20:28:53.463648       1 pv_protection_controller.go:205] Got event on PV pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a
I0903 20:28:53.463685       1 pv_protection_controller.go:125] Processing PV pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a
I0903 20:28:53.463903       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" with version 2476
I0903 20:28:53.464056       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: phase: Failed, bound to: "azuredisk-9336/pvc-rlm5r (uid: 4e82e9a3-4cec-475d-ba56-acc33dd6431a)", boundByController: true
I0903 20:28:53.464158       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: volume is bound to claim azuredisk-9336/pvc-rlm5r
I0903 20:28:53.464220       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: claim azuredisk-9336/pvc-rlm5r not found
I0903 20:28:53.464237       1 pv_controller.go:1108] reclaimVolume[pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a]: policy is Delete
I0903 20:28:53.464303       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a[e4672ec3-5fb6-4f53-ae4e-1c22b85aed7d]]
I0903 20:28:53.464355       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a] started
I0903 20:28:53.472936       1 pv_controller.go:1244] Volume "pvc-4e82e9a3-4cec-475d-ba56-acc33dd6431a" is already being deleted
... skipping 58 lines ...
I0903 20:28:57.520422       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-2205/pvc-knpjz]: no volume found
I0903 20:28:57.520434       1 pv_controller.go:1446] provisionClaim[azuredisk-2205/pvc-knpjz]: started
I0903 20:28:57.520447       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-2205/pvc-knpjz[c63b5f69-2521-455c-89d6-29afffb92b2e]]
I0903 20:28:57.520453       1 pv_controller.go:1764] operation "provision-azuredisk-2205/pvc-knpjz[c63b5f69-2521-455c-89d6-29afffb92b2e]" is already running, skipping
I0903 20:28:57.520692       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2205/pvc-knpjz" with version 2502
I0903 20:28:57.520951       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" duration="43.811943ms"
I0903 20:28:57.520987       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-fgmt9\": the object has been modified; please apply your changes to the latest version and try again"
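The "Error syncing deployment ... the object has been modified" line above is the API server rejecting a write made against a stale resourceVersion; the deployment controller requeues and retries against the latest object, which is why the very next lines start a new sync. A hypothetical sketch of that optimistic-concurrency check follows; the in-memory `store` type is illustrative only and not the real API machinery.

```go
package main

import "fmt"

// object is a minimal stand-in for an API object with a resourceVersion.
type object struct {
	resourceVersion int
	replicas        int
}

// store rejects updates whose resourceVersion does not match the stored
// one, mimicking the API server's optimistic concurrency control.
type store struct {
	current object
}

func (s *store) update(desired object) error {
	if desired.resourceVersion != s.current.resourceVersion {
		return fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again")
	}
	desired.resourceVersion++ // every successful write bumps the version
	s.current = desired
	return nil
}

func main() {
	s := &store{current: object{resourceVersion: 1, replicas: 1}}

	stale := object{resourceVersion: 0, replicas: 2} // built from an old read
	fmt.Println(s.update(stale))                     // conflict: controller must requeue

	fresh := s.current // re-read the latest object
	fresh.replicas = 2 // re-apply the change on top of it
	fmt.Println(s.update(fresh)) // <nil>: retry against the latest version succeeds
}
```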
I0903 20:28:57.521145       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" startTime="2022-09-03 20:28:57.521121555 +0000 UTC m=+892.753329348"
I0903 20:28:57.525268       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e StorageAccountType:StandardSSD_LRS Size:10
I0903 20:28:57.525801       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9"
I0903 20:28:57.526011       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" duration="4.87616ms"
I0903 20:28:57.526052       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" startTime="2022-09-03 20:28:57.526030116 +0000 UTC m=+892.758238009"
I0903 20:28:57.526529       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-fgmt9" timed out (false) [last progress check: 2022-09-03 20:28:57 +0000 UTC - now: 2022-09-03 20:28:57.526522122 +0000 UTC m=+892.758729915]
... skipping 273 lines ...
I0903 20:29:23.581682       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" duration="540.006µs"
I0903 20:29:23.583825       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 20:29:23.585990       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954" (10.128824ms)
I0903 20:29:23.586022       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954", timestamp:time.Time{wall:0xc0bd0c28dfc38720, ext:918765115701, loc:(*time.Location)(0x731ea80)}}
I0903 20:29:23.586154       1 controller_utils.go:972] Ignoring inactive pod azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954-crl8l in state Running, deletion time 2022-09-03 20:29:53 +0000 UTC
I0903 20:29:23.586269       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954" (250.303µs)
W0903 20:29:23.595904       1 reconciler.go:385] Multi-Attach error for volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e") from node "capz-9buiac-mp-0000001" Volume is already used by pods azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954-crl8l on node capz-9buiac-mp-0000000
I0903 20:29:23.596088       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-fgmt9-5ddfb5d954-jw2d4" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-c63b5f69-2521-455c-89d6-29afffb92b2e\" Volume is already used by pod(s) azuredisk-volume-tester-fgmt9-5ddfb5d954-crl8l"
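The Multi-Attach warning above is the attach/detach controller refusing to attach a ReadWriteOnce Azure disk to a second node while a pod on another node still uses it; the attach only proceeds once the old pod's volume is detached. A hypothetical Go sketch of that single-writer bookkeeping follows; the `attachTracker` type and its methods are illustrative and not the controller's real data structures.

```go
package main

import "fmt"

// attachTracker records, per disk URI, which node the disk is currently
// attached to. The real controller keeps far richer state; this only
// models the single-writer rule that produces the Multi-Attach error.
type attachTracker struct {
	attachedTo map[string]string // disk URI -> node name
}

func newAttachTracker() *attachTracker {
	return &attachTracker{attachedTo: map[string]string{}}
}

// Attach succeeds if the disk is unattached or already on the same node;
// attaching a ReadWriteOnce disk to a second node is rejected.
func (t *attachTracker) Attach(diskURI, node string) error {
	if owner, ok := t.attachedTo[diskURI]; ok && owner != node {
		return fmt.Errorf("Multi-Attach error: %s is already used on node %s", diskURI, owner)
	}
	t.attachedTo[diskURI] = node
	return nil
}

// Detach frees the disk so another node may attach it.
func (t *attachTracker) Detach(diskURI string) {
	delete(t.attachedTo, diskURI)
}

func main() {
	t := newAttachTracker()
	disk := "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e"

	fmt.Println(t.Attach(disk, "capz-9buiac-mp-0000000")) // <nil>
	fmt.Println(t.Attach(disk, "capz-9buiac-mp-0000001")) // Multi-Attach error
	t.Detach(disk)
	fmt.Println(t.Attach(disk, "capz-9buiac-mp-0000001")) // <nil> after detach
}
```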
I0903 20:29:24.984111       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:29:28.000575       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-w1cekx" (20.1µs)
I0903 20:29:28.000614       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-rtws8h" (6.1µs)
I0903 20:29:28.436361       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:29:30.127539       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0903 20:29:33.150992       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 574 lines ...
I0903 20:32:24.429291       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: claim azuredisk-2205/pvc-knpjz not found
I0903 20:32:24.429417       1 pv_controller.go:1108] reclaimVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: policy is Delete
I0903 20:32:24.429534       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]]
I0903 20:32:24.429652       1 pv_controller.go:1764] operation "delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]" is already running, skipping
I0903 20:32:24.430683       1 pv_controller.go:1341] isVolumeReleased[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is released
I0903 20:32:24.430702       1 pv_controller.go:1405] doDeleteVolume [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]
I0903 20:32:24.456016       1 pv_controller.go:1260] deletion of volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
I0903 20:32:24.456038       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: set phase Failed
I0903 20:32:24.456047       1 pv_controller.go:858] updating PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: set phase Failed
I0903 20:32:24.459665       1 pv_protection_controller.go:205] Got event on PV pvc-c63b5f69-2521-455c-89d6-29afffb92b2e
I0903 20:32:24.459699       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" with version 2893
I0903 20:32:24.459752       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: phase: Failed, bound to: "azuredisk-2205/pvc-knpjz (uid: c63b5f69-2521-455c-89d6-29afffb92b2e)", boundByController: true
I0903 20:32:24.459776       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is bound to claim azuredisk-2205/pvc-knpjz
I0903 20:32:24.459834       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: claim azuredisk-2205/pvc-knpjz not found
I0903 20:32:24.459843       1 pv_controller.go:1108] reclaimVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: policy is Delete
I0903 20:32:24.459856       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]]
I0903 20:32:24.459863       1 pv_controller.go:1764] operation "delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]" is already running, skipping
I0903 20:32:24.459930       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" with version 2893
I0903 20:32:24.460145       1 pv_controller.go:879] volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" entered phase "Failed"
I0903 20:32:24.460246       1 pv_controller.go:901] volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted
E0903 20:32:24.460455       1 goroutinemap.go:150] Operation for "delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]" failed. No retries permitted until 2022-09-03 20:32:24.960408557 +0000 UTC m=+1100.192616350 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:32:24.460608       1 event.go:291] "Event occurred" object="pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_1), could not be deleted"
I0903 20:32:25.072495       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:32:25.073931       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:32:25.163595       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000001"
I0903 20:32:25.163823       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e to the node "capz-9buiac-mp-0000001" mounted false
I0903 20:32:25.164266       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-9buiac-mp-0000001" succeeded. VolumesAttached: []
... skipping 3 lines ...
I0903 20:32:25.187806       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e"
I0903 20:32:25.187822       1 azure_controller_vmss.go:175] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e)
I0903 20:32:28.466703       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000001 ReadyCondition updated. Updating timestamp.
I0903 20:32:33.159536       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:32:33.192149       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:32:33.192241       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" with version 2893
I0903 20:32:33.192304       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: phase: Failed, bound to: "azuredisk-2205/pvc-knpjz (uid: c63b5f69-2521-455c-89d6-29afffb92b2e)", boundByController: true
I0903 20:32:33.192349       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is bound to claim azuredisk-2205/pvc-knpjz
I0903 20:32:33.192375       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: claim azuredisk-2205/pvc-knpjz not found
I0903 20:32:33.192389       1 pv_controller.go:1108] reclaimVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: policy is Delete
I0903 20:32:33.192411       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]]
I0903 20:32:33.192477       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e] started
I0903 20:32:33.206226       1 pv_controller.go:1341] isVolumeReleased[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is released
I0903 20:32:33.206269       1 pv_controller.go:1405] doDeleteVolume [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]
I0903 20:32:33.206307       1 pv_controller.go:1260] deletion of volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) since it's in attaching or detaching state
I0903 20:32:33.206323       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: set phase Failed
I0903 20:32:33.206333       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: phase Failed already set
E0903 20:32:33.206366       1 goroutinemap.go:150] Operation for "delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]" failed. No retries permitted until 2022-09-03 20:32:34.206341973 +0000 UTC m=+1109.438549766 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e) since it's in attaching or detaching state"
I0903 20:32:33.520276       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="89.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51530" resp=200
I0903 20:32:36.413150       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0903 20:32:36.472690       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 20:32:37.128618       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 9 items received
I0903 20:32:38.153469       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0903 20:32:38.174210       1 gc_controller.go:161] GC'ing orphaned
... skipping 11 lines ...
I0903 20:32:43.520283       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:60348" resp=200
I0903 20:32:47.631927       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 8 items received
I0903 20:32:48.141331       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:32:48.160546       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:32:48.192297       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:32:48.192428       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" with version 2893
I0903 20:32:48.192468       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: phase: Failed, bound to: "azuredisk-2205/pvc-knpjz (uid: c63b5f69-2521-455c-89d6-29afffb92b2e)", boundByController: true
I0903 20:32:48.192505       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is bound to claim azuredisk-2205/pvc-knpjz
I0903 20:32:48.192522       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: claim azuredisk-2205/pvc-knpjz not found
I0903 20:32:48.192530       1 pv_controller.go:1108] reclaimVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: policy is Delete
I0903 20:32:48.192550       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]]
I0903 20:32:48.192582       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e] started
I0903 20:32:48.200127       1 pv_controller.go:1341] isVolumeReleased[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is released
... skipping 3 lines ...
I0903 20:32:53.413400       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e
I0903 20:32:53.413440       1 pv_controller.go:1436] volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" deleted
I0903 20:32:53.413455       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: success
I0903 20:32:53.423895       1 pv_protection_controller.go:205] Got event on PV pvc-c63b5f69-2521-455c-89d6-29afffb92b2e
I0903 20:32:53.423935       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" with version 2939
I0903 20:32:53.424140       1 pv_protection_controller.go:125] Processing PV pvc-c63b5f69-2521-455c-89d6-29afffb92b2e
I0903 20:32:53.424191       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: phase: Failed, bound to: "azuredisk-2205/pvc-knpjz (uid: c63b5f69-2521-455c-89d6-29afffb92b2e)", boundByController: true
I0903 20:32:53.425215       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: volume is bound to claim azuredisk-2205/pvc-knpjz
I0903 20:32:53.425311       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: claim azuredisk-2205/pvc-knpjz not found
I0903 20:32:53.425387       1 pv_controller.go:1108] reclaimVolume[pvc-c63b5f69-2521-455c-89d6-29afffb92b2e]: policy is Delete
I0903 20:32:53.425521       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c63b5f69-2521-455c-89d6-29afffb92b2e[389ab3e1-d888-4356-be32-84828195b465]]
I0903 20:32:53.425679       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c63b5f69-2521-455c-89d6-29afffb92b2e] started
I0903 20:32:53.431733       1 pv_controller.go:1244] Volume "pvc-c63b5f69-2521-455c-89d6-29afffb92b2e" is already being deleted
... skipping 104 lines ...
I0903 20:33:01.193443       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-fgmt9-5ddfb5d954.171173b00fdccb4b, uid 8c2735f8-b2f9-4cf5-8366-3115ffc021aa, event type delete
I0903 20:33:01.197434       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-fgmt9-5ddfb5d954.171173b6211998dd, uid bb3373e4-e709-4de7-8f97-be0b2be4766d, event type delete
I0903 20:33:01.202998       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-fgmt9.171173b00f521067, uid ae802113-9084-4167-9a4b-b361121ab526, event type delete
I0903 20:33:01.206639       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name pvc-knpjz.171173b0085105d3, uid 11a05f95-5972-4a59-98b3-2bc7b1abf29c, event type delete
I0903 20:33:01.210972       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name pvc-knpjz.171173b09a143667, uid a49200d7-9985-472f-a076-58af3b56251b, event type delete
I0903 20:33:01.245427       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2205, name default-token-bb789, uid 1d35575a-35d5-45ba-a31f-b96de6aaa481, event type delete
E0903 20:33:01.257133       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2205/default: secrets "default-token-vpvxl" is forbidden: unable to create new content in namespace azuredisk-2205 because it is being terminated
I0903 20:33:01.299273       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2205/default), service account deleted, removing tokens
I0903 20:33:01.300492       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2205" (3µs)
I0903 20:33:01.300930       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2205, name default, uid 4c97a127-8643-43c7-aec8-1c9fb13a5827, event type delete
I0903 20:33:01.308378       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2205" (2.501µs)
I0903 20:33:01.308559       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2205, estimate: 0, errors: <nil>
I0903 20:33:01.317737       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2205" (253.876848ms)
... skipping 91 lines ...
I0903 20:33:17.200595       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (6.877687ms)
I0903 20:33:17.205697       1 publisher.go:181] Finished syncing namespace "azuredisk-1387" (11.681848ms)
I0903 20:33:17.785980       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0903 20:33:17.818938       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid eeb2f01a-351d-4f34-a23d-d7efc40e4b95, event type delete
I0903 20:33:17.821412       1 publisher.go:181] Finished syncing namespace "azuredisk-3410" (2.29533ms)
I0903 20:33:17.853866       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-9nvsc, uid a784ea6d-4ddf-4295-8fab-16c7b7e65ed3, event type delete
E0903 20:33:17.865859       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-h9dq9" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0903 20:33:17.899300       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-9qh7q.171173e87a536b8a, uid f9398ab0-029e-4f33-ab15-80c8d2ade139, event type delete
I0903 20:33:17.917868       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0903 20:33:17.917969       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.4µs)
I0903 20:33:17.917995       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 2cf9d9a5-6178-4d3c-9881-5c5677cdae9a, event type delete
I0903 20:33:17.947371       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
I0903 20:33:17.947757       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (3µs)
... skipping 88 lines ...
I0903 20:33:19.007663       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837 StorageAccountType:StandardSSD_LRS Size:10
I0903 20:33:19.226740       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0903 20:33:19.231379       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:33:19.253068       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-k2x7k, uid d9d07dd7-f03c-48e0-9448-b05582aad028, event type delete
I0903 20:33:19.263724       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (2.6µs)
I0903 20:33:19.264871       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8582, name default, uid 3da433ad-f2df-460b-a803-212b71775138, event type delete
E0903 20:33:19.265722       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-lq7wd" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0903 20:33:19.266303       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8582/default), service account deleted, removing tokens
I0903 20:33:19.282906       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8582, name kube-root-ca.crt, uid ba5dafb8-4a33-46be-b4cf-a75ba3e3e2f9, event type delete
I0903 20:33:19.284615       1 publisher.go:181] Finished syncing namespace "azuredisk-8582" (1.665521ms)
I0903 20:33:19.387194       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (3.9µs)
I0903 20:33:19.387735       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8582, estimate: 0, errors: <nil>
I0903 20:33:19.395804       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (182.649319ms)
I0903 20:33:20.641409       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0903 20:33:20.703023       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 5091fcf2-491f-424c-a387-466d759bca33, event type delete
I0903 20:33:20.707492       1 publisher.go:181] Finished syncing namespace "azuredisk-7726" (4.345255ms)
I0903 20:33:20.719285       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-2zqbw, uid 21b5cddc-eff0-4e00-b33f-a8b747b1d3c0, event type delete
E0903 20:33:20.733983       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-mgr4j" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0903 20:33:20.766725       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0903 20:33:20.767066       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 350ff2ab-4c14-448a-81a3-05129e5f33ad, event type delete
I0903 20:33:20.767123       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (1.9µs)
I0903 20:33:20.781443       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.9µs)
I0903 20:33:20.783400       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
I0903 20:33:20.791982       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7726" (156.139082ms)
... skipping 184 lines ...
I0903 20:33:22.106623       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837" to node "capz-9buiac-mp-0000000".
I0903 20:33:22.106870       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565" to node "capz-9buiac-mp-0000000".
I0903 20:33:22.107130       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" to node "capz-9buiac-mp-0000000".
I0903 20:33:22.134160       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565" lun 0 to node "capz-9buiac-mp-0000000".
I0903 20:33:22.134208       1 azure_controller_vmss.go:101] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - attach disk(capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565) with DiskEncryptionSetID()
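"GetDiskLun returned: cannot find Lun for disk ..." in the lines above means the disk is not yet attached to the VM, so the controller picks a free LUN before calling the attach API (the log then shows "Trying to attach volume ... lun 0"). The sketch below shows one way to choose the lowest unused LUN, assuming a simple list of LUNs already in use; the helper name and the 64-slot limit are assumptions for illustration, not the azure cloud provider's actual implementation.

```go
package main

import (
	"fmt"
	"sort"
)

// nextFreeLun returns the lowest LUN in [0, maxLuns) that is not already
// occupied by an attached data disk, or an error if the VM has no free slot.
func nextFreeLun(usedLuns []int, maxLuns int) (int, error) {
	used := make(map[int]bool, len(usedLuns))
	for _, l := range usedLuns {
		used[l] = true
	}
	for lun := 0; lun < maxLuns; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return 0, fmt.Errorf("no free LUN among %d slots", maxLuns)
}

func main() {
	attached := []int{0, 1, 3} // LUNs already holding data disks
	sort.Ints(attached)

	lun, err := nextFreeLun(attached, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println("attach new disk at LUN", lun) // prints 2
}
```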
I0903 20:33:22.162494       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-vxzql, uid 4eeeee47-72b5-4ed1-ba98-843948b977b9, event type delete
E0903 20:33:22.174729       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-5glk5" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0903 20:33:22.225970       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid 2c48ff57-48fd-4734-b01d-e73ac1985655, event type delete
I0903 20:33:22.229284       1 publisher.go:181] Finished syncing namespace "azuredisk-3086" (3.263041ms)
I0903 20:33:22.240151       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0903 20:33:22.240324       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid b2ca6416-dd80-4f27-8454-ea6069e1490b, event type delete
I0903 20:33:22.240451       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.1µs)
I0903 20:33:22.302445       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3µs)
... skipping 288 lines ...
I0903 20:33:57.632135       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:33:57.632232       1 pv_controller.go:1753] scheduleOperation[delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]]
I0903 20:33:57.632320       1 pv_controller.go:1764] operation "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" is already running, skipping
I0903 20:33:57.631930       1 pv_controller.go:1232] deleteVolumeOperation [pvc-efb891df-4bc2-4b12-9f95-b636e179f837] started
I0903 20:33:57.634138       1 pv_controller.go:1341] isVolumeReleased[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is released
I0903 20:33:57.634326       1 pv_controller.go:1405] doDeleteVolume [pvc-efb891df-4bc2-4b12-9f95-b636e179f837]
I0903 20:33:57.681035       1 pv_controller.go:1260] deletion of volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:33:57.681084       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: set phase Failed
I0903 20:33:57.681105       1 pv_controller.go:858] updating PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: set phase Failed
I0903 20:33:57.685533       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:33:57.685570       1 pv_controller.go:879] volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" entered phase "Failed"
I0903 20:33:57.685580       1 pv_controller.go:901] volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
E0903 20:33:57.685891       1 goroutinemap.go:150] Operation for "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" failed. No retries permitted until 2022-09-03 20:33:58.185648235 +0000 UTC m=+1193.417856028 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:33:57.685994       1 event.go:291] "Event occurred" object="pvc-efb891df-4bc2-4b12-9f95-b636e179f837" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:33:57.686465       1 pv_protection_controller.go:205] Got event on PV pvc-efb891df-4bc2-4b12-9f95-b636e179f837
I0903 20:33:57.686673       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:33:57.686753       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:33:57.686793       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:33:57.686864       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:33:57.686877       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:33:57.686931       1 pv_controller.go:1753] scheduleOperation[delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]]
I0903 20:33:57.686946       1 pv_controller.go:1766] operation "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" postponed due to exponential backoff
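
The lines above show the PV controller failing to delete the dynamically provisioned disk because it is still attached to the VMSS node, marking the volume Failed, and postponing the retry with an exponentially growing delay (the log later shows durationBeforeRetry values of 500ms, 1s, 2s, 4s). The following is a minimal, hypothetical Go sketch of that retry bookkeeping, written only to illustrate the pattern; it is not the actual kube-controller-manager goroutinemap/pv_controller code, and names such as exponentialBackoff, failed and allowed are assumptions made for this sketch.

    package main

    import (
        "fmt"
        "time"
    )

    type backoffEntry struct {
        lastDelay time.Duration
        notBefore time.Time
    }

    // exponentialBackoff tracks, per operation key, the earliest time a
    // failed operation may be retried ("No retries permitted until ...").
    type exponentialBackoff struct {
        initial time.Duration
        max     time.Duration
        entries map[string]*backoffEntry
    }

    func newExponentialBackoff(initial, max time.Duration) *exponentialBackoff {
        return &exponentialBackoff{initial: initial, max: max, entries: map[string]*backoffEntry{}}
    }

    // failed records a failure and doubles the delay on each subsequent
    // failure, mirroring the 500ms -> 1s -> 2s -> 4s progression in the log.
    func (b *exponentialBackoff) failed(op string, now time.Time) time.Duration {
        e, ok := b.entries[op]
        if !ok {
            e = &backoffEntry{lastDelay: b.initial}
            b.entries[op] = e
        } else {
            e.lastDelay *= 2
            if e.lastDelay > b.max {
                e.lastDelay = b.max
            }
        }
        e.notBefore = now.Add(e.lastDelay)
        return e.lastDelay
    }

    // allowed reports whether the operation may run again; while it returns
    // false the controller logs "postponed due to exponential backoff".
    func (b *exponentialBackoff) allowed(op string, now time.Time) bool {
        e, ok := b.entries[op]
        return !ok || now.After(e.notBefore)
    }

    func main() {
        bo := newExponentialBackoff(500*time.Millisecond, 10*time.Minute)
        op := "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837"
        now := time.Now()
        for i := 0; i < 4; i++ {
            fmt.Printf("retry blocked for %v\n", bo.failed(op, now))
        }
        fmt.Println("allowed immediately after failure:", bo.allowed(op, now))
    }
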
I0903 20:33:58.186340       1 gc_controller.go:161] GC'ing orphaned
... skipping 7 lines ...
I0903 20:34:03.199648       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: volume is bound to claim azuredisk-1387/pvc-5r8ww
I0903 20:34:03.199663       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: claim azuredisk-1387/pvc-5r8ww found: phase: Bound, bound to: "pvc-47a07875-77fd-487f-bb48-b546b9084565", bindCompleted: true, boundByController: true
I0903 20:34:03.199676       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: all is bound
I0903 20:34:03.199685       1 pv_controller.go:858] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: set phase Bound
I0903 20:34:03.199694       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:34:03.199707       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:34:03.199725       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:34:03.199739       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww]: volume "pvc-47a07875-77fd-487f-bb48-b546b9084565" found: phase: Bound, bound to: "azuredisk-1387/pvc-5r8ww (uid: 47a07875-77fd-487f-bb48-b546b9084565)", boundByController: true
I0903 20:34:03.199746       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:34:03.199751       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww]: claim is already correctly bound
I0903 20:34:03.199760       1 pv_controller.go:1012] binding volume "pvc-47a07875-77fd-487f-bb48-b546b9084565" to claim "azuredisk-1387/pvc-5r8ww"
I0903 20:34:03.199766       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:34:03.199770       1 pv_controller.go:910] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: binding to "azuredisk-1387/pvc-5r8ww"
... skipping 32 lines ...
I0903 20:34:03.206305       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: claim azuredisk-1387/pvc-lvlbc found: phase: Bound, bound to: "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1", bindCompleted: true, boundByController: true
I0903 20:34:03.206338       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: all is bound
I0903 20:34:03.206352       1 pv_controller.go:858] updating PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: set phase Bound
I0903 20:34:03.206362       1 pv_controller.go:861] updating PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase Bound already set
I0903 20:34:03.212312       1 pv_controller.go:1341] isVolumeReleased[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is released
I0903 20:34:03.212333       1 pv_controller.go:1405] doDeleteVolume [pvc-efb891df-4bc2-4b12-9f95-b636e179f837]
I0903 20:34:03.286770       1 pv_controller.go:1260] deletion of volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:34:03.286803       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: set phase Failed
I0903 20:34:03.286813       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase Failed already set
E0903 20:34:03.286845       1 goroutinemap.go:150] Operation for "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" failed. No retries permitted until 2022-09-03 20:34:04.286821789 +0000 UTC m=+1199.519029682 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:34:03.519915       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="74.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:45368" resp=200
I0903 20:34:05.135493       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:34:05.135528       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565 to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:34:05.135539       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837 to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:34:05.135572       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:34:05.180377       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
... skipping 71 lines ...
I0903 20:34:18.200802       1 pv_controller.go:858] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: set phase Bound
I0903 20:34:18.200812       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:34:18.200812       1 pv_controller.go:1038] volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" bound to claim "azuredisk-1387/pvc-lvlbc"
I0903 20:34:18.200825       1 pv_controller.go:1039] volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-lvlbc (uid: 83df71c4-0be9-4454-90b4-bc53dc6e01f1)", boundByController: true
I0903 20:34:18.200828       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:34:18.200838       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-lvlbc" status after binding: phase: Bound, bound to: "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1", bindCompleted: true, boundByController: true
I0903 20:34:18.200846       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:34:18.200867       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:34:18.200883       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:34:18.200919       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:34:18.200991       1 pv_controller.go:1753] scheduleOperation[delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]]
I0903 20:34:18.201073       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3095
I0903 20:34:18.201184       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase: Bound, bound to: "azuredisk-1387/pvc-lvlbc (uid: 83df71c4-0be9-4454-90b4-bc53dc6e01f1)", boundByController: true
... skipping 7 lines ...
I0903 20:34:18.208337       1 pv_controller.go:1405] doDeleteVolume [pvc-efb891df-4bc2-4b12-9f95-b636e179f837]
I0903 20:34:18.226592       1 controller.go:272] Triggering nodeSync
I0903 20:34:18.226623       1 controller.go:291] nodeSync has been triggered
I0903 20:34:18.226632       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 20:34:18.226667       1 controller.go:790] Finished updateLoadBalancerHosts
I0903 20:34:18.226673       1 controller.go:731] It took 4.4e-05 seconds to finish nodeSyncInternal
I0903 20:34:18.231885       1 pv_controller.go:1260] deletion of volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:34:18.231905       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: set phase Failed
I0903 20:34:18.231915       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase Failed already set
E0903 20:34:18.231954       1 goroutinemap.go:150] Operation for "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" failed. No retries permitted until 2022-09-03 20:34:20.231924605 +0000 UTC m=+1215.464132498 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:34:18.401070       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0903 20:34:19.269418       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:34:20.529632       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565) returned with <nil>
I0903 20:34:20.529692       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565) succeeded
I0903 20:34:20.529702       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565 was detached from node:capz-9buiac-mp-0000000
I0903 20:34:20.529724       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-47a07875-77fd-487f-bb48-b546b9084565" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-47a07875-77fd-487f-bb48-b546b9084565") on node "capz-9buiac-mp-0000000" 
... skipping 42 lines ...
I0903 20:34:33.201568       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: volume is bound to claim azuredisk-1387/pvc-5r8ww
I0903 20:34:33.201640       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: claim azuredisk-1387/pvc-5r8ww found: phase: Bound, bound to: "pvc-47a07875-77fd-487f-bb48-b546b9084565", bindCompleted: true, boundByController: true
I0903 20:34:33.201692       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: all is bound
I0903 20:34:33.201737       1 pv_controller.go:858] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: set phase Bound
I0903 20:34:33.201766       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:34:33.201807       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:34:33.201879       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:34:33.201937       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:34:33.201977       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:34:33.202068       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:34:33.202085       1 pv_controller.go:1753] scheduleOperation[delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]]
I0903 20:34:33.202142       1 pv_controller.go:1232] deleteVolumeOperation [pvc-efb891df-4bc2-4b12-9f95-b636e179f837] started
I0903 20:34:33.202423       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3095
... skipping 2 lines ...
I0903 20:34:33.202628       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: claim azuredisk-1387/pvc-lvlbc found: phase: Bound, bound to: "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1", bindCompleted: true, boundByController: true
I0903 20:34:33.202680       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: all is bound
I0903 20:34:33.202712       1 pv_controller.go:858] updating PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: set phase Bound
I0903 20:34:33.202749       1 pv_controller.go:861] updating PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase Bound already set
I0903 20:34:33.216483       1 pv_controller.go:1341] isVolumeReleased[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is released
I0903 20:34:33.216502       1 pv_controller.go:1405] doDeleteVolume [pvc-efb891df-4bc2-4b12-9f95-b636e179f837]
I0903 20:34:33.216584       1 pv_controller.go:1260] deletion of volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) since it's in attaching or detaching state
I0903 20:34:33.216598       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: set phase Failed
I0903 20:34:33.216607       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase Failed already set
E0903 20:34:33.216680       1 goroutinemap.go:150] Operation for "delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]" failed. No retries permitted until 2022-09-03 20:34:37.216635779 +0000 UTC m=+1232.448843572 (durationBeforeRetry 4s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) since it's in attaching or detaching state"
I0903 20:34:33.519804       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43952" resp=200
I0903 20:34:34.136174       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 9 items received
I0903 20:34:35.822702       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) returned with <nil>
I0903 20:34:35.822768       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837) succeeded
I0903 20:34:35.822780       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837 was detached from node:capz-9buiac-mp-0000000
I0903 20:34:35.822951       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-9buiac-mp-0000000, refreshing the cache
... skipping 21 lines ...
I0903 20:34:48.202131       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:34:48.202188       1 pv_controller.go:922] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: already bound to "azuredisk-1387/pvc-5r8ww"
I0903 20:34:48.202197       1 pv_controller.go:858] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: set phase Bound
I0903 20:34:48.202206       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:34:48.202214       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww]: binding to "pvc-47a07875-77fd-487f-bb48-b546b9084565"
I0903 20:34:48.202279       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3182
I0903 20:34:48.202302       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:34:48.202368       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:34:48.202402       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww]: already bound to "pvc-47a07875-77fd-487f-bb48-b546b9084565"
I0903 20:34:48.202412       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww] status: set phase Bound
I0903 20:34:48.202458       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:34:48.202469       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:34:48.202507       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-5r8ww] status: phase Bound already set
... skipping 32 lines ...
I0903 20:34:53.446124       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837
I0903 20:34:53.446326       1 pv_controller.go:1436] volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" deleted
I0903 20:34:53.446413       1 pv_controller.go:1284] deleteVolumeOperation [pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: success
I0903 20:34:53.458935       1 pv_protection_controller.go:205] Got event on PV pvc-efb891df-4bc2-4b12-9f95-b636e179f837
I0903 20:34:53.458964       1 pv_protection_controller.go:125] Processing PV pvc-efb891df-4bc2-4b12-9f95-b636e179f837
I0903 20:34:53.459023       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" with version 3266
I0903 20:34:53.459130       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: phase: Failed, bound to: "azuredisk-1387/pvc-w2fpv (uid: efb891df-4bc2-4b12-9f95-b636e179f837)", boundByController: true
I0903 20:34:53.459198       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: volume is bound to claim azuredisk-1387/pvc-w2fpv
I0903 20:34:53.459272       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: claim azuredisk-1387/pvc-w2fpv not found
I0903 20:34:53.459288       1 pv_controller.go:1108] reclaimVolume[pvc-efb891df-4bc2-4b12-9f95-b636e179f837]: policy is Delete
I0903 20:34:53.459315       1 pv_controller.go:1753] scheduleOperation[delete-pvc-efb891df-4bc2-4b12-9f95-b636e179f837[50f54cc1-ae21-4512-b6af-b7423952a9ce]]
I0903 20:34:53.459381       1 pv_controller.go:1232] deleteVolumeOperation [pvc-efb891df-4bc2-4b12-9f95-b636e179f837] started
I0903 20:34:53.463671       1 pv_controller.go:1244] Volume "pvc-efb891df-4bc2-4b12-9f95-b636e179f837" is already being deleted
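
The sequence above also shows the ordering constraint behind the repeated failures: the managed disk cannot be deleted while it is attached to the node (or while it is mid-attach/detach), so deletion only succeeds after the attach/detach controller has detached it from the VMSS instance. The sketch below, under the assumption of hypothetical helpers diskState, detachDisk and deleteDisk (not the real Azure cloud-provider functions), is only meant to make that ordering explicit.

    package main

    import (
        "errors"
        "fmt"
    )

    type diskState int

    const (
        attached diskState = iota
        detaching
        detachedState
    )

    var state = attached

    // detachDisk stands in for the azure_controller_vmss.go /
    // azure_controller_common.go detach seen in the log.
    func detachDisk(disk, node string) {
        state = detachedState
        fmt.Printf("detached %s from %s\n", disk, node)
    }

    // deleteDisk fails while the disk is attached or transitioning,
    // matching the two error messages that appear in the log.
    func deleteDisk(disk string) error {
        switch state {
        case attached:
            return errors.New("disk already attached to node, could not be deleted")
        case detaching:
            return errors.New("failed to delete disk since it's in attaching or detaching state")
        default:
            fmt.Printf("deleted %s\n", disk)
            return nil
        }
    }

    func main() {
        disk := "capz-9buiac-dynamic-pvc-efb891df-4bc2-4b12-9f95-b636e179f837"
        node := "capz-9buiac-mp-0000000"

        // Early delete attempts fail while the disk is still attached,
        // which is what the Failed phase and VolumeFailedDelete events show.
        fmt.Println("attempt 1:", deleteDisk(disk))

        // The attach/detach controller detaches the disk ...
        detachDisk(disk, node)

        // ... after which the PV controller's next retry succeeds.
        fmt.Println("attempt 2:", deleteDisk(disk))
    }
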
... skipping 46 lines ...
I0903 20:34:54.410858       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]]
I0903 20:34:54.411008       1 pv_controller.go:1764] operation "delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]" is already running, skipping
I0903 20:34:54.410378       1 pv_protection_controller.go:205] Got event on PV pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1
I0903 20:34:54.410967       1 pv_controller.go:1232] deleteVolumeOperation [pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1] started
I0903 20:34:54.412586       1 pv_controller.go:1341] isVolumeReleased[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: volume is released
I0903 20:34:54.412602       1 pv_controller.go:1405] doDeleteVolume [pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]
I0903 20:34:54.412686       1 pv_controller.go:1260] deletion of volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) since it's in attaching or detaching state
I0903 20:34:54.412742       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: set phase Failed
I0903 20:34:54.412766       1 pv_controller.go:858] updating PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: set phase Failed
I0903 20:34:54.415355       1 pv_protection_controller.go:205] Got event on PV pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1
I0903 20:34:54.415528       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3275
I0903 20:34:54.415719       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase: Failed, bound to: "azuredisk-1387/pvc-lvlbc (uid: 83df71c4-0be9-4454-90b4-bc53dc6e01f1)", boundByController: true
I0903 20:34:54.416033       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: volume is bound to claim azuredisk-1387/pvc-lvlbc
I0903 20:34:54.416056       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: claim azuredisk-1387/pvc-lvlbc not found
I0903 20:34:54.416063       1 pv_controller.go:1108] reclaimVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: policy is Delete
I0903 20:34:54.416075       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]]
I0903 20:34:54.416190       1 pv_controller.go:1764] operation "delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]" is already running, skipping
I0903 20:34:54.415985       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3275
I0903 20:34:54.416269       1 pv_controller.go:879] volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" entered phase "Failed"
I0903 20:34:54.416284       1 pv_controller.go:901] volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) since it's in attaching or detaching state
E0903 20:34:54.416437       1 goroutinemap.go:150] Operation for "delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]" failed. No retries permitted until 2022-09-03 20:34:54.916414695 +0000 UTC m=+1250.148622588 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) since it's in attaching or detaching state"
I0903 20:34:54.416601       1 event.go:291] "Event occurred" object="pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) since it's in attaching or detaching state"
I0903 20:34:56.130795       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 2 items received
I0903 20:34:56.176801       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) returned with <nil>
I0903 20:34:56.176852       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1) succeeded
I0903 20:34:56.176860       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1 was detached from node:capz-9buiac-mp-0000000
I0903 20:34:56.176879       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1") on node "capz-9buiac-mp-0000000" 
I0903 20:34:58.189282       1 gc_controller.go:161] GC'ing orphaned
... skipping 5 lines ...
I0903 20:35:03.202545       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: volume is bound to claim azuredisk-1387/pvc-5r8ww
I0903 20:35:03.202606       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: claim azuredisk-1387/pvc-5r8ww found: phase: Bound, bound to: "pvc-47a07875-77fd-487f-bb48-b546b9084565", bindCompleted: true, boundByController: true
I0903 20:35:03.202626       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: all is bound
I0903 20:35:03.202712       1 pv_controller.go:858] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: set phase Bound
I0903 20:35:03.202775       1 pv_controller.go:861] updating PersistentVolume[pvc-47a07875-77fd-487f-bb48-b546b9084565]: phase Bound already set
I0903 20:35:03.202860       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3275
I0903 20:35:03.202901       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase: Failed, bound to: "azuredisk-1387/pvc-lvlbc (uid: 83df71c4-0be9-4454-90b4-bc53dc6e01f1)", boundByController: true
I0903 20:35:03.202945       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: volume is bound to claim azuredisk-1387/pvc-lvlbc
I0903 20:35:03.202992       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: claim azuredisk-1387/pvc-lvlbc not found
I0903 20:35:03.203043       1 pv_controller.go:1108] reclaimVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: policy is Delete
I0903 20:35:03.203134       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]]
I0903 20:35:03.203199       1 pv_controller.go:1232] deleteVolumeOperation [pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1] started
I0903 20:35:03.202783       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-5r8ww" with version 3088
... skipping 19 lines ...
I0903 20:35:08.383902       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1
I0903 20:35:08.383945       1 pv_controller.go:1436] volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" deleted
I0903 20:35:08.383961       1 pv_controller.go:1284] deleteVolumeOperation [pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: success
I0903 20:35:08.393292       1 pv_protection_controller.go:205] Got event on PV pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1
I0903 20:35:08.393322       1 pv_protection_controller.go:125] Processing PV pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1
I0903 20:35:08.393451       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" with version 3297
I0903 20:35:08.393514       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: phase: Failed, bound to: "azuredisk-1387/pvc-lvlbc (uid: 83df71c4-0be9-4454-90b4-bc53dc6e01f1)", boundByController: true
I0903 20:35:08.393550       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: volume is bound to claim azuredisk-1387/pvc-lvlbc
I0903 20:35:08.393599       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: claim azuredisk-1387/pvc-lvlbc not found
I0903 20:35:08.393614       1 pv_controller.go:1108] reclaimVolume[pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1]: policy is Delete
I0903 20:35:08.393646       1 pv_controller.go:1753] scheduleOperation[delete-pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1[c1ab3aa3-bc03-4b28-9b50-f8e62a0c98c7]]
I0903 20:35:08.393711       1 pv_controller.go:1232] deleteVolumeOperation [pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1] started
I0903 20:35:08.398032       1 pv_controller.go:1244] Volume "pvc-83df71c4-0be9-4454-90b4-bc53dc6e01f1" is already being deleted
... skipping 267 lines ...
I0903 20:35:26.065830       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf" lun 0 to node "capz-9buiac-mp-0000000".
I0903 20:35:26.065912       1 azure_controller_vmss.go:101] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - attach disk(capz-9buiac-dynamic-pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf) with DiskEncryptionSetID()
I0903 20:35:26.105434       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0903 20:35:26.123433       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid bd7dd621-3956-4736-ae8b-736526f45dc2, event type delete
I0903 20:35:26.125741       1 publisher.go:181] Finished syncing namespace "azuredisk-1387" (1.974825ms)
I0903 20:35:26.134585       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-mjwjw, uid 32fa0928-9368-475b-a157-d00d25f9f6e6, event type delete
E0903 20:35:26.148838       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-p9cg9" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0903 20:35:26.150582       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0903 20:35:26.150768       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.6µs)
I0903 20:35:26.150868       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid e25d2ba5-55f7-4cd9-81e2-f99f14eb7765, event type delete
I0903 20:35:26.198698       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name azuredisk-volume-tester-qd4t9.171173eda5aea38f, uid 01808758-5718-40d3-a81b-f6f8ad4b971a, event type delete
I0903 20:35:26.201543       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name azuredisk-volume-tester-qd4t9.171173f03405ad09, uid 370bb7d9-4beb-4495-bd82-04a14728e09e, event type delete
I0903 20:35:26.206545       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name azuredisk-volume-tester-qd4t9.171173f29d269137, uid 68060c56-1e87-4315-9e60-54fa611861ae, event type delete
... skipping 233 lines ...
I0903 20:36:01.472121       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:01.472246       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:01.472373       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:01.472467       1 pv_controller.go:1764] operation "delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]" is already running, skipping
I0903 20:36:01.476885       1 pv_controller.go:1341] isVolumeReleased[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is released
I0903 20:36:01.477047       1 pv_controller.go:1405] doDeleteVolume [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]
I0903 20:36:01.506577       1 pv_controller.go:1260] deletion of volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:36:01.506601       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: set phase Failed
I0903 20:36:01.506610       1 pv_controller.go:858] updating PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: set phase Failed
I0903 20:36:01.513810       1 pv_protection_controller.go:205] Got event on PV pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f
I0903 20:36:01.513807       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3452
I0903 20:36:01.514123       1 pv_controller.go:879] volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" entered phase "Failed"
I0903 20:36:01.514266       1 pv_controller.go:901] volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
E0903 20:36:01.514380       1 goroutinemap.go:150] Operation for "delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]" failed. No retries permitted until 2022-09-03 20:36:02.014349372 +0000 UTC m=+1317.246557265 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:36:01.513835       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3452
I0903 20:36:01.514726       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase: Failed, bound to: "azuredisk-4547/pvc-pt244 (uid: 2e21b4f1-8d78-44f6-ad25-f987b8b8818f)", boundByController: true
I0903 20:36:01.514754       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is bound to claim azuredisk-4547/pvc-pt244
I0903 20:36:01.514777       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:01.514792       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:01.514806       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:01.514818       1 pv_controller.go:1766] operation "delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]" postponed due to exponential backoff
I0903 20:36:01.514612       1 event.go:291] "Event occurred" object="pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
... skipping 21 lines ...
I0903 20:36:03.205415       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: volume is bound to claim azuredisk-4547/pvc-qvz5m
I0903 20:36:03.205431       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: claim azuredisk-4547/pvc-qvz5m found: phase: Bound, bound to: "pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf", bindCompleted: true, boundByController: true
I0903 20:36:03.205445       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: all is bound
I0903 20:36:03.205455       1 pv_controller.go:858] updating PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: set phase Bound
I0903 20:36:03.205464       1 pv_controller.go:861] updating PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: phase Bound already set
I0903 20:36:03.205475       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3452
I0903 20:36:03.205494       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase: Failed, bound to: "azuredisk-4547/pvc-pt244 (uid: 2e21b4f1-8d78-44f6-ad25-f987b8b8818f)", boundByController: true
I0903 20:36:03.205548       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is bound to claim azuredisk-4547/pvc-pt244
I0903 20:36:03.205583       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:03.205680       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:03.205757       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:03.205806       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f] started
I0903 20:36:03.211338       1 pv_controller.go:1341] isVolumeReleased[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is released
I0903 20:36:03.211358       1 pv_controller.go:1405] doDeleteVolume [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]
I0903 20:36:03.238884       1 pv_controller.go:1260] deletion of volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:36:03.238907       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: set phase Failed
I0903 20:36:03.238916       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase Failed already set
E0903 20:36:03.238953       1 goroutinemap.go:150] Operation for "delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]" failed. No retries permitted until 2022-09-03 20:36:04.238925491 +0000 UTC m=+1319.471133384 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:36:03.520944       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="79.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41876" resp=200
I0903 20:36:05.207660       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:36:05.207699       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:36:05.207709       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f to the node "capz-9buiac-mp-0000000" mounted false
I0903 20:36:05.290481       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-9buiac-mp-0000000"
I0903 20:36:05.290515       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf to the node "capz-9buiac-mp-0000000" mounted false
... skipping 48 lines ...
I0903 20:36:18.206578       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-4547/pvc-qvz5m] status: set phase Bound
I0903 20:36:18.206600       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4547/pvc-qvz5m] status: phase Bound already set
I0903 20:36:18.206612       1 pv_controller.go:1038] volume "pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf" bound to claim "azuredisk-4547/pvc-qvz5m"
I0903 20:36:18.206632       1 pv_controller.go:1039] volume "pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-qvz5m (uid: 0377e07f-1b5c-44da-a38a-14f1ae799ebf)", boundByController: true
I0903 20:36:18.206650       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-qvz5m" status after binding: phase: Bound, bound to: "pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf", bindCompleted: true, boundByController: true
I0903 20:36:18.206813       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3452
I0903 20:36:18.206840       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase: Failed, bound to: "azuredisk-4547/pvc-pt244 (uid: 2e21b4f1-8d78-44f6-ad25-f987b8b8818f)", boundByController: true
I0903 20:36:18.206864       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is bound to claim azuredisk-4547/pvc-pt244
I0903 20:36:18.206904       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:18.206911       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:18.206928       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:18.206985       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f] started
I0903 20:36:18.212771       1 pv_controller.go:1341] isVolumeReleased[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is released
I0903 20:36:18.212793       1 pv_controller.go:1405] doDeleteVolume [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]
I0903 20:36:18.212853       1 pv_controller.go:1260] deletion of volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) since it's in attaching or detaching state
I0903 20:36:18.212868       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: set phase Failed
I0903 20:36:18.212878       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase Failed already set
E0903 20:36:18.212929       1 goroutinemap.go:150] Operation for "delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]" failed. No retries permitted until 2022-09-03 20:36:20.212885457 +0000 UTC m=+1335.445093250 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) since it's in attaching or detaching state"
I0903 20:36:19.342406       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:36:20.991947       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0903 20:36:23.520370       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="90.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:47692" resp=200
I0903 20:36:25.130962       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 52 items received
I0903 20:36:30.919850       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) returned with <nil>
I0903 20:36:30.919902       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f) succeeded
... skipping 7 lines ...
I0903 20:36:33.206744       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: volume is bound to claim azuredisk-4547/pvc-qvz5m
I0903 20:36:33.206803       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: claim azuredisk-4547/pvc-qvz5m found: phase: Bound, bound to: "pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf", bindCompleted: true, boundByController: true
I0903 20:36:33.206820       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: all is bound
I0903 20:36:33.206828       1 pv_controller.go:858] updating PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: set phase Bound
I0903 20:36:33.206836       1 pv_controller.go:861] updating PersistentVolume[pvc-0377e07f-1b5c-44da-a38a-14f1ae799ebf]: phase Bound already set
I0903 20:36:33.206890       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3452
I0903 20:36:33.206920       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase: Failed, bound to: "azuredisk-4547/pvc-pt244 (uid: 2e21b4f1-8d78-44f6-ad25-f987b8b8818f)", boundByController: true
I0903 20:36:33.206971       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is bound to claim azuredisk-4547/pvc-pt244
I0903 20:36:33.206992       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:33.207016       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:33.207036       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:33.207064       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f] started
I0903 20:36:33.207184       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-qvz5m" with version 3350
... skipping 20 lines ...
I0903 20:36:38.531709       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f
I0903 20:36:38.531748       1 pv_controller.go:1436] volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" deleted
I0903 20:36:38.531760       1 pv_controller.go:1284] deleteVolumeOperation [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: success
I0903 20:36:38.541485       1 pv_protection_controller.go:205] Got event on PV pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f
I0903 20:36:38.541515       1 pv_protection_controller.go:125] Processing PV pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f
I0903 20:36:38.541916       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" with version 3508
I0903 20:36:38.541953       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: phase: Failed, bound to: "azuredisk-4547/pvc-pt244 (uid: 2e21b4f1-8d78-44f6-ad25-f987b8b8818f)", boundByController: true
I0903 20:36:38.541983       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: volume is bound to claim azuredisk-4547/pvc-pt244
I0903 20:36:38.542143       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: claim azuredisk-4547/pvc-pt244 not found
I0903 20:36:38.542160       1 pv_controller.go:1108] reclaimVolume[pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f]: policy is Delete
I0903 20:36:38.542175       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f[143f983d-3216-4d5c-a3bd-a585f5e9e074]]
I0903 20:36:38.542201       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f] started
I0903 20:36:38.545129       1 pv_controller.go:1244] Volume "pvc-2e21b4f1-8d78-44f6-ad25-f987b8b8818f" is already being deleted
... skipping 384 lines ...
I0903 20:37:01.561959       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-9buiac-mp-0000000, refreshing the cache
I0903 20:37:01.561960       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-9buiac-mp-0000000, refreshing the cache
I0903 20:37:01.572569       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9183
I0903 20:37:01.613680       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9183, name kube-root-ca.crt, uid c44fbb80-c52e-4cf7-969f-32b80749f3d7, event type delete
I0903 20:37:01.616786       1 publisher.go:181] Finished syncing namespace "azuredisk-9183" (3.047838ms)
I0903 20:37:01.649675       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9183, name default-token-slxv9, uid 9c358895-d72d-4bfb-8b19-31ba26511427, event type delete
E0903 20:37:01.660784       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9183/default: secrets "default-token-j4gx8" is forbidden: unable to create new content in namespace azuredisk-9183 because it is being terminated
I0903 20:37:01.681705       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9183/default), service account deleted, removing tokens
I0903 20:37:01.681750       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9183" (1.5µs)
I0903 20:37:01.682082       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9183, name default, uid d2b5b270-c874-4bbc-8ee3-ea23c2516be8, event type delete
I0903 20:37:01.708493       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9183, estimate: 0, errors: <nil>
I0903 20:37:01.708843       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9183" (1.9µs)
I0903 20:37:01.717345       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9183" (146.841921ms)
... skipping 527 lines ...
I0903 20:38:08.815954       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: claim azuredisk-7578/pvc-ntfnm not found
I0903 20:38:08.815965       1 pv_controller.go:1108] reclaimVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: policy is Delete
I0903 20:38:08.815980       1 pv_controller.go:1753] scheduleOperation[delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]]
I0903 20:38:08.815991       1 pv_controller.go:1764] operation "delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]" is already running, skipping
I0903 20:38:08.817216       1 pv_controller.go:1341] isVolumeReleased[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is released
I0903 20:38:08.817236       1 pv_controller.go:1405] doDeleteVolume [pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]
I0903 20:38:08.843929       1 pv_controller.go:1260] deletion of volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
I0903 20:38:08.843954       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: set phase Failed
I0903 20:38:08.843965       1 pv_controller.go:858] updating PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: set phase Failed
I0903 20:38:08.847302       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" with version 3749
I0903 20:38:08.847475       1 pv_controller.go:879] volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" entered phase "Failed"
I0903 20:38:08.847606       1 pv_controller.go:901] volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted
E0903 20:38:08.847664       1 goroutinemap.go:150] Operation for "delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]" failed. No retries permitted until 2022-09-03 20:38:09.347636223 +0000 UTC m=+1444.579844116 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
I0903 20:38:08.847347       1 pv_protection_controller.go:205] Got event on PV pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f
I0903 20:38:08.847365       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" with version 3749
I0903 20:38:08.847794       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: phase: Failed, bound to: "azuredisk-7578/pvc-ntfnm (uid: 86eae2ed-729d-409d-a0ac-57defb56ae4f)", boundByController: true
I0903 20:38:08.847880       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is bound to claim azuredisk-7578/pvc-ntfnm
I0903 20:38:08.847934       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: claim azuredisk-7578/pvc-ntfnm not found
I0903 20:38:08.847945       1 pv_controller.go:1108] reclaimVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: policy is Delete
I0903 20:38:08.847960       1 pv_controller.go:1753] scheduleOperation[delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]]
I0903 20:38:08.847969       1 pv_controller.go:1766] operation "delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]" postponed due to exponential backoff
I0903 20:38:08.848059       1 event.go:291] "Event occurred" object="pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/virtualMachineScaleSets/capz-9buiac-mp-0/virtualMachines/capz-9buiac-mp-0_0), could not be deleted"
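[editor's note] The cycle above shows the PV controller trying to delete pvc-86eae2ed while its Azure disk is still attached to the VMSS instance: doDeleteVolume fails, the volume is marked Failed, a VolumeFailedDelete warning event is emitted, and the "No retries permitted until ... (durationBeforeRetry 500ms)" line gates the next attempt behind an exponential backoff; once the detach completes (further down the log) the retry succeeds and the disk is deleted. As a minimal, illustrative Go sketch of that backoff-gating pattern only — names and values here are invented for illustration and are not the controller's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryGate remembers when an operation may run again and how long to
    // wait after the next failure (illustrative stand-in for the per-volume
    // backoff seen in the log).
    type retryGate struct {
        notBefore time.Time
        backoff   time.Duration
    }

    func (g *retryGate) run(op func() error) error {
        if time.Now().Before(g.notBefore) {
            // mirrors "postponed due to exponential backoff" in the log
            return fmt.Errorf("postponed due to exponential backoff (until %s)", g.notBefore.Format(time.RFC3339))
        }
        if err := op(); err != nil {
            g.notBefore = time.Now().Add(g.backoff)
            g.backoff *= 2 // double the wait after every failure
            return err
        }
        g.backoff = 500 * time.Millisecond // reset on success
        return nil
    }

    func main() {
        gate := &retryGate{backoff: 500 * time.Millisecond}
        attached := true // pretend the disk is still attached at first

        for i := 0; i < 5; i++ {
            err := gate.run(func() error {
                if attached {
                    return errors.New("disk already attached to node, could not be deleted")
                }
                return nil // detach finished, delete succeeds
            })
            fmt.Printf("attempt %d: %v\n", i+1, err)
            attached = false // simulate the detach completing after the first failure
            time.Sleep(200 * time.Millisecond)
        }
    }

Run as written, the first attempt fails, the next couple are postponed by the gate, and a later attempt succeeds once the simulated detach has completed, which is the same shape as the delete/backoff/retry sequence in the log above.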
... skipping 42 lines ...
I0903 20:38:18.211129       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is bound to claim azuredisk-7578/pvc-lj7pm
I0903 20:38:18.211183       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: claim azuredisk-7578/pvc-lj7pm found: phase: Bound, bound to: "pvc-140e723b-90be-42c6-a322-5026194a3f94", bindCompleted: true, boundByController: true
I0903 20:38:18.211221       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: all is bound
I0903 20:38:18.211245       1 pv_controller.go:858] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: set phase Bound
I0903 20:38:18.211255       1 pv_controller.go:861] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase Bound already set
I0903 20:38:18.211272       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" with version 3749
I0903 20:38:18.211298       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: phase: Failed, bound to: "azuredisk-7578/pvc-ntfnm (uid: 86eae2ed-729d-409d-a0ac-57defb56ae4f)", boundByController: true
I0903 20:38:18.211323       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is bound to claim azuredisk-7578/pvc-ntfnm
I0903 20:38:18.211341       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: claim azuredisk-7578/pvc-ntfnm not found
I0903 20:38:18.211348       1 pv_controller.go:1108] reclaimVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: policy is Delete
I0903 20:38:18.211364       1 pv_controller.go:1753] scheduleOperation[delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]]
I0903 20:38:18.211378       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" with version 3617
I0903 20:38:18.211400       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: phase: Bound, bound to: "azuredisk-7578/pvc-8lp2w (uid: df7cf21e-c296-4449-a855-32571dc286e4)", boundByController: true
... skipping 34 lines ...
I0903 20:38:18.211916       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm] status: phase Bound already set
I0903 20:38:18.211930       1 pv_controller.go:1038] volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" bound to claim "azuredisk-7578/pvc-lj7pm"
I0903 20:38:18.211944       1 pv_controller.go:1039] volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-lj7pm (uid: 140e723b-90be-42c6-a322-5026194a3f94)", boundByController: true
I0903 20:38:18.211958       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-lj7pm" status after binding: phase: Bound, bound to: "pvc-140e723b-90be-42c6-a322-5026194a3f94", bindCompleted: true, boundByController: true
I0903 20:38:18.218042       1 pv_controller.go:1341] isVolumeReleased[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is released
I0903 20:38:18.218060       1 pv_controller.go:1405] doDeleteVolume [pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]
I0903 20:38:18.218096       1 pv_controller.go:1260] deletion of volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) since it's in attaching or detaching state
I0903 20:38:18.218111       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: set phase Failed
I0903 20:38:18.218120       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: phase Failed already set
E0903 20:38:18.218161       1 goroutinemap.go:150] Operation for "delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]" failed. No retries permitted until 2022-09-03 20:38:19.218128025 +0000 UTC m=+1454.450335918 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f) since it's in attaching or detaching state"
I0903 20:38:18.524063       1 node_lifecycle_controller.go:1047] Node capz-9buiac-mp-0000000 ReadyCondition updated. Updating timestamp.
I0903 20:38:19.110365       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 11 items received
I0903 20:38:19.411468       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 20:38:20.142586       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 4 items received
I0903 20:38:21.930034       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0903 20:38:23.520645       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="72.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36256" resp=200
... skipping 19 lines ...
I0903 20:38:33.211707       1 pv_controller.go:1012] binding volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" to claim "azuredisk-7578/pvc-lj7pm"
I0903 20:38:33.211712       1 pv_controller.go:861] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase Bound already set
I0903 20:38:33.211716       1 pv_controller.go:910] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: binding to "azuredisk-7578/pvc-lj7pm"
I0903 20:38:33.211724       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" with version 3749
I0903 20:38:33.211730       1 pv_controller.go:922] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: already bound to "azuredisk-7578/pvc-lj7pm"
I0903 20:38:33.211738       1 pv_controller.go:858] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: set phase Bound
I0903 20:38:33.211740       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: phase: Failed, bound to: "azuredisk-7578/pvc-ntfnm (uid: 86eae2ed-729d-409d-a0ac-57defb56ae4f)", boundByController: true
I0903 20:38:33.211745       1 pv_controller.go:861] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase Bound already set
I0903 20:38:33.211753       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm]: binding to "pvc-140e723b-90be-42c6-a322-5026194a3f94"
I0903 20:38:33.211766       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is bound to claim azuredisk-7578/pvc-ntfnm
I0903 20:38:33.211771       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm]: already bound to "pvc-140e723b-90be-42c6-a322-5026194a3f94"
I0903 20:38:33.211780       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm] status: set phase Bound
I0903 20:38:33.211783       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: claim azuredisk-7578/pvc-ntfnm not found
... skipping 35 lines ...
I0903 20:38:38.487026       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f
I0903 20:38:38.487064       1 pv_controller.go:1436] volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" deleted
I0903 20:38:38.487077       1 pv_controller.go:1284] deleteVolumeOperation [pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: success
I0903 20:38:38.495001       1 pv_protection_controller.go:205] Got event on PV pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f
I0903 20:38:38.495030       1 pv_protection_controller.go:125] Processing PV pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f
I0903 20:38:38.495333       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" with version 3795
I0903 20:38:38.495363       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: phase: Failed, bound to: "azuredisk-7578/pvc-ntfnm (uid: 86eae2ed-729d-409d-a0ac-57defb56ae4f)", boundByController: true
I0903 20:38:38.495387       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: volume is bound to claim azuredisk-7578/pvc-ntfnm
I0903 20:38:38.495403       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: claim azuredisk-7578/pvc-ntfnm not found
I0903 20:38:38.495415       1 pv_controller.go:1108] reclaimVolume[pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f]: policy is Delete
I0903 20:38:38.495429       1 pv_controller.go:1753] scheduleOperation[delete-pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f[d8a2ed3b-3205-407e-b420-b109f7b1222d]]
I0903 20:38:38.495450       1 pv_controller.go:1232] deleteVolumeOperation [pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f] started
I0903 20:38:38.503069       1 pv_controller.go:1244] Volume "pvc-86eae2ed-729d-409d-a0ac-57defb56ae4f" is already being deleted
... skipping 45 lines ...
I0903 20:38:40.068851       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: claim azuredisk-7578/pvc-8lp2w not found
I0903 20:38:40.068860       1 pv_controller.go:1108] reclaimVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: policy is Delete
I0903 20:38:40.068876       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]]
I0903 20:38:40.068889       1 pv_controller.go:1764] operation "delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]" is already running, skipping
I0903 20:38:40.070354       1 pv_controller.go:1341] isVolumeReleased[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: volume is released
I0903 20:38:40.070370       1 pv_controller.go:1405] doDeleteVolume [pvc-df7cf21e-c296-4449-a855-32571dc286e4]
I0903 20:38:40.070403       1 pv_controller.go:1260] deletion of volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) since it's in attaching or detaching state
I0903 20:38:40.070419       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: set phase Failed
I0903 20:38:40.070446       1 pv_controller.go:858] updating PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: set phase Failed
I0903 20:38:40.073001       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" with version 3803
I0903 20:38:40.073336       1 pv_controller.go:879] volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" entered phase "Failed"
I0903 20:38:40.073354       1 pv_controller.go:901] volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) since it's in attaching or detaching state
I0903 20:38:40.073286       1 pv_protection_controller.go:205] Got event on PV pvc-df7cf21e-c296-4449-a855-32571dc286e4
I0903 20:38:40.073304       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" with version 3803
I0903 20:38:40.073445       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: phase: Failed, bound to: "azuredisk-7578/pvc-8lp2w (uid: df7cf21e-c296-4449-a855-32571dc286e4)", boundByController: true
I0903 20:38:40.073476       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: volume is bound to claim azuredisk-7578/pvc-8lp2w
I0903 20:38:40.073518       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: claim azuredisk-7578/pvc-8lp2w not found
I0903 20:38:40.073554       1 pv_controller.go:1108] reclaimVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: policy is Delete
I0903 20:38:40.073592       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]]
I0903 20:38:40.073607       1 pv_controller.go:1764] operation "delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]" is already running, skipping
I0903 20:38:40.073708       1 event.go:291] "Event occurred" object="pvc-df7cf21e-c296-4449-a855-32571dc286e4" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) since it's in attaching or detaching state"
E0903 20:38:40.073889       1 goroutinemap.go:150] Operation for "delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]" failed. No retries permitted until 2022-09-03 20:38:40.573864677 +0000 UTC m=+1475.806072570 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) since it's in attaching or detaching state"
I0903 20:38:43.520292       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="79.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34914" resp=200
I0903 20:38:46.015728       1 azure_controller_vmss.go:187] azureDisk - update(capz-9buiac): vm(capz-9buiac-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) returned with <nil>
I0903 20:38:46.015799       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4) succeeded
I0903 20:38:46.015811       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4 was detached from node:capz-9buiac-mp-0000000
I0903 20:38:46.016022       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4") on node "capz-9buiac-mp-0000000" 
I0903 20:38:46.016116       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-9buiac-mp-0000000, refreshing the cache
... skipping 9 lines ...
I0903 20:38:48.211926       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: all is bound
I0903 20:38:48.211935       1 pv_controller.go:858] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: set phase Bound
I0903 20:38:48.211948       1 pv_controller.go:861] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase Bound already set
I0903 20:38:48.211962       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" with version 3803
I0903 20:38:48.211977       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-lj7pm" with version 3608
I0903 20:38:48.211994       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm]: phase: Bound, bound to: "pvc-140e723b-90be-42c6-a322-5026194a3f94", bindCompleted: true, boundByController: true
I0903 20:38:48.211978       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: phase: Failed, bound to: "azuredisk-7578/pvc-8lp2w (uid: df7cf21e-c296-4449-a855-32571dc286e4)", boundByController: true
I0903 20:38:48.212080       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: volume is bound to claim azuredisk-7578/pvc-8lp2w
I0903 20:38:48.212098       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: claim azuredisk-7578/pvc-8lp2w not found
I0903 20:38:48.212106       1 pv_controller.go:1108] reclaimVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: policy is Delete
I0903 20:38:48.212134       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]]
I0903 20:38:48.212162       1 pv_controller.go:1232] deleteVolumeOperation [pvc-df7cf21e-c296-4449-a855-32571dc286e4] started
I0903 20:38:48.212169       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-lj7pm]: volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" found: phase: Bound, bound to: "azuredisk-7578/pvc-lj7pm (uid: 140e723b-90be-42c6-a322-5026194a3f94)", boundByController: true
... skipping 16 lines ...
I0903 20:38:53.442120       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-df7cf21e-c296-4449-a855-32571dc286e4
I0903 20:38:53.442157       1 pv_controller.go:1436] volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" deleted
I0903 20:38:53.442171       1 pv_controller.go:1284] deleteVolumeOperation [pvc-df7cf21e-c296-4449-a855-32571dc286e4]: success
I0903 20:38:53.452524       1 pv_protection_controller.go:205] Got event on PV pvc-df7cf21e-c296-4449-a855-32571dc286e4
I0903 20:38:53.452545       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" with version 3825
I0903 20:38:53.452562       1 pv_protection_controller.go:125] Processing PV pvc-df7cf21e-c296-4449-a855-32571dc286e4
I0903 20:38:53.452574       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: phase: Failed, bound to: "azuredisk-7578/pvc-8lp2w (uid: df7cf21e-c296-4449-a855-32571dc286e4)", boundByController: true
I0903 20:38:53.452596       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: volume is bound to claim azuredisk-7578/pvc-8lp2w
I0903 20:38:53.452612       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: claim azuredisk-7578/pvc-8lp2w not found
I0903 20:38:53.452619       1 pv_controller.go:1108] reclaimVolume[pvc-df7cf21e-c296-4449-a855-32571dc286e4]: policy is Delete
I0903 20:38:53.452634       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df7cf21e-c296-4449-a855-32571dc286e4[fc8cd0e7-43e5-47d8-9b71-d74c60db0ef1]]
I0903 20:38:53.452656       1 pv_controller.go:1232] deleteVolumeOperation [pvc-df7cf21e-c296-4449-a855-32571dc286e4] started
I0903 20:38:53.456129       1 pv_controller.go:1244] Volume "pvc-df7cf21e-c296-4449-a855-32571dc286e4" is already being deleted
... skipping 46 lines ...
I0903 20:38:56.011624       1 pv_controller.go:1108] reclaimVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: policy is Delete
I0903 20:38:56.011637       1 pv_controller.go:1753] scheduleOperation[delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]]
I0903 20:38:56.011645       1 pv_controller.go:1764] operation "delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]" is already running, skipping
I0903 20:38:56.011700       1 pv_controller.go:1232] deleteVolumeOperation [pvc-140e723b-90be-42c6-a322-5026194a3f94] started
I0903 20:38:56.013180       1 pv_controller.go:1341] isVolumeReleased[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is released
I0903 20:38:56.013195       1 pv_controller.go:1405] doDeleteVolume [pvc-140e723b-90be-42c6-a322-5026194a3f94]
I0903 20:38:56.013269       1 pv_controller.go:1260] deletion of volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94) since it's in attaching or detaching state
I0903 20:38:56.013286       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-140e723b-90be-42c6-a322-5026194a3f94]: set phase Failed
I0903 20:38:56.013342       1 pv_controller.go:858] updating PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: set phase Failed
I0903 20:38:56.016020       1 pv_protection_controller.go:205] Got event on PV pvc-140e723b-90be-42c6-a322-5026194a3f94
I0903 20:38:56.016231       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" with version 3834
I0903 20:38:56.016482       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase: Failed, bound to: "azuredisk-7578/pvc-lj7pm (uid: 140e723b-90be-42c6-a322-5026194a3f94)", boundByController: true
I0903 20:38:56.016642       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is bound to claim azuredisk-7578/pvc-lj7pm
I0903 20:38:56.016793       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: claim azuredisk-7578/pvc-lj7pm not found
I0903 20:38:56.016903       1 pv_controller.go:1108] reclaimVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: policy is Delete
I0903 20:38:56.017040       1 pv_controller.go:1753] scheduleOperation[delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]]
I0903 20:38:56.017192       1 pv_controller.go:1764] operation "delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]" is already running, skipping
I0903 20:38:56.017216       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" with version 3834
I0903 20:38:56.017469       1 pv_controller.go:879] volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" entered phase "Failed"
I0903 20:38:56.017609       1 pv_controller.go:901] volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94) since it's in attaching or detaching state
E0903 20:38:56.017685       1 goroutinemap.go:150] Operation for "delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]" failed. No retries permitted until 2022-09-03 20:38:56.517660854 +0000 UTC m=+1491.749868747 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94) since it's in attaching or detaching state"
I0903 20:38:56.017806       1 event.go:291] "Event occurred" object="pvc-140e723b-90be-42c6-a322-5026194a3f94" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94) since it's in attaching or detaching state"
I0903 20:38:58.001023       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" startTime="2022-09-03 20:38:58.000958667 +0000 UTC m=+1493.233166560"
I0903 20:38:58.001062       1 deployment_controller.go:583] "Deployment has been deleted" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9"
I0903 20:38:58.001101       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-fgmt9" duration="108.101µs"
I0903 20:38:58.202115       1 gc_controller.go:161] GC'ing orphaned
I0903 20:38:58.202150       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 20:38:59.138347       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 9 items received
... skipping 3 lines ...
I0903 20:39:01.343433       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94 was detached from node:capz-9buiac-mp-0000000
I0903 20:39:01.343609       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94") on node "capz-9buiac-mp-0000000" 
I0903 20:39:02.126399       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 7 items received
I0903 20:39:03.176563       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 20:39:03.212558       1 pv_controller_base.go:528] resyncing PV controller
I0903 20:39:03.212611       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" with version 3834
I0903 20:39:03.212660       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase: Failed, bound to: "azuredisk-7578/pvc-lj7pm (uid: 140e723b-90be-42c6-a322-5026194a3f94)", boundByController: true
I0903 20:39:03.212693       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is bound to claim azuredisk-7578/pvc-lj7pm
I0903 20:39:03.212712       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: claim azuredisk-7578/pvc-lj7pm not found
I0903 20:39:03.212720       1 pv_controller.go:1108] reclaimVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: policy is Delete
I0903 20:39:03.212746       1 pv_controller.go:1753] scheduleOperation[delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]]
I0903 20:39:03.212781       1 pv_controller.go:1232] deleteVolumeOperation [pvc-140e723b-90be-42c6-a322-5026194a3f94] started
I0903 20:39:03.233995       1 pv_controller.go:1341] isVolumeReleased[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is released
... skipping 3 lines ...
I0903 20:39:08.426015       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-9buiac/providers/Microsoft.Compute/disks/capz-9buiac-dynamic-pvc-140e723b-90be-42c6-a322-5026194a3f94
I0903 20:39:08.426054       1 pv_controller.go:1436] volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" deleted
I0903 20:39:08.426092       1 pv_controller.go:1284] deleteVolumeOperation [pvc-140e723b-90be-42c6-a322-5026194a3f94]: success
I0903 20:39:08.434647       1 pv_protection_controller.go:205] Got event on PV pvc-140e723b-90be-42c6-a322-5026194a3f94
I0903 20:39:08.434812       1 pv_protection_controller.go:125] Processing PV pvc-140e723b-90be-42c6-a322-5026194a3f94
I0903 20:39:08.435262       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-140e723b-90be-42c6-a322-5026194a3f94" with version 3853
I0903 20:39:08.435531       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: phase: Failed, bound to: "azuredisk-7578/pvc-lj7pm (uid: 140e723b-90be-42c6-a322-5026194a3f94)", boundByController: true
I0903 20:39:08.435671       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: volume is bound to claim azuredisk-7578/pvc-lj7pm
I0903 20:39:08.435821       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: claim azuredisk-7578/pvc-lj7pm not found
I0903 20:39:08.435839       1 pv_controller.go:1108] reclaimVolume[pvc-140e723b-90be-42c6-a322-5026194a3f94]: policy is Delete
I0903 20:39:08.435856       1 pv_controller.go:1753] scheduleOperation[delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]]
I0903 20:39:08.436026       1 pv_controller.go:1764] operation "delete-pvc-140e723b-90be-42c6-a322-5026194a3f94[ad7154cd-2bf1-41fd-affe-d763ef69f91c]" is already running, skipping
I0903 20:39:08.443646       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-140e723b-90be-42c6-a322-5026194a3f94
... skipping 104 lines ...
I0903 20:39:19.853316       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4657, name default-token-fwhrp, uid 868280f8-6c9c-4f81-a698-e85f6117d64d, event type delete
I0903 20:39:19.885223       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4657" (2.5µs)
I0903 20:39:19.885270       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4657, estimate: 0, errors: <nil>
I0903 20:39:19.893463       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4657" (149.099457ms)
I0903 20:39:21.161490       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1359
I0903 20:39:21.189898       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1359, name default-token-96t8v, uid f5bcbd19-6a18-4825-8d19-55ebdf7a63ed, event type delete
E0903 20:39:21.202594       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1359/default: secrets "default-token-bsfp5" is forbidden: unable to create new content in namespace azuredisk-1359 because it is being terminated
I0903 20:39:21.253005       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1359/default), service account deleted, removing tokens
I0903 20:39:21.253079       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (2.5µs)
I0903 20:39:21.253112       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1359, name default, uid 629cad54-a514-4638-93b7-8f87b01ce234, event type delete
I0903 20:39:21.277894       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1359, name kube-root-ca.crt, uid 27cf9c42-70e1-4008-a28e-c77eef5aa68a, event type delete
I0903 20:39:21.279266       1 publisher.go:181] Finished syncing namespace "azuredisk-1359" (1.174913ms)
I0903 20:39:21.318519       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1359, estimate: 0, errors: <nil>
... skipping 1032 lines ...
I0903 20:42:31.107388       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9103" (143.009645ms)
2022/09/03 20:42:32 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1444.889 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-zkcsd, container manager
STEP: Dumping workload cluster default/capz-9buiac logs
Sep  3 20:44:06.327: INFO: Collecting logs for Linux node capz-9buiac-control-plane-2xbb8 in cluster capz-9buiac in namespace default

Sep  3 20:45:06.329: INFO: Collecting boot logs for AzureMachine capz-9buiac-control-plane-2xbb8

Failed to get logs for machine capz-9buiac-control-plane-lczqh, cluster default/capz-9buiac: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 20:45:07.517: INFO: Collecting logs for Linux node capz-9buiac-mp-0000000 in cluster capz-9buiac in namespace default

Sep  3 20:46:07.520: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-9buiac-mp-0

Sep  3 20:46:08.163: INFO: Collecting logs for Linux node capz-9buiac-mp-0000001 in cluster capz-9buiac in namespace default

Sep  3 20:47:08.165: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-9buiac-mp-0

Failed to get logs for machine pool capz-9buiac-mp-0, cluster default/capz-9buiac: open /etc/azure-ssh/azure-ssh: no such file or directory
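[editor's note] The two "Failed to get logs ... open /etc/azure-ssh/azure-ssh: no such file or directory" messages above come from the post-run artifact-collection phase: the CAPZ log collector appears to expect an SSH private key mounted at /etc/azure-ssh/azure-ssh and skips node boot-log collection when that file is absent. This affects only the diagnostics gathered for the job, not the test verdict already reported ("SUCCESS! -- 12 Passed | 0 Failed").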
STEP: Dumping workload cluster default/capz-9buiac kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-node-gwdzg
STEP: Collecting events for Pod kube-system/etcd-capz-9buiac-control-plane-2xbb8
STEP: Creating log watcher for controller kube-system/kube-proxy-chmht, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-8mvrb, container calico-node
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-w6d9l, container calico-kube-controllers
STEP: failed to find events of Pod "etcd-capz-9buiac-control-plane-2xbb8"
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-9buiac-control-plane-2xbb8
STEP: failed to find events of Pod "kube-controller-manager-capz-9buiac-control-plane-2xbb8"
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-9buiac-control-plane-2xbb8
STEP: failed to find events of Pod "kube-scheduler-capz-9buiac-control-plane-2xbb8"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-9buiac-control-plane-2xbb8, container kube-apiserver
STEP: Fetching kube-system pod logs took 1.131170996s
STEP: Dumping workload cluster default/capz-9buiac Azure activity log
STEP: Collecting events for Pod kube-system/kube-proxy-chmht
STEP: Creating log watcher for controller kube-system/kube-proxy-k7pcr, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-9buiac-control-plane-2xbb8, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-9buiac-control-plane-2xbb8
STEP: failed to find events of Pod "kube-apiserver-capz-9buiac-control-plane-2xbb8"
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-g67jq
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-9buiac-control-plane-2xbb8, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/etcd-capz-9buiac-control-plane-2xbb8, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-w4tf9, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-w4tf9
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-g67jq, container coredns
... skipping 26 lines ...