Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-03 04:39
Elapsed: 50m6s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 626 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 137 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-m0dmyx-kubeconfig; do sleep 1; done"
capz-m0dmyx-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-m0dmyx-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-m0dmyx-control-plane-csq9p   NotReady   control-plane,master   3s    v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-m0dmyx-control-plane-csq9p condition met
node/capz-m0dmyx-md-0-4f2wx condition met
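The bootstrap steps above follow the usual Cluster API workflow: pull the generated kubeconfig out of the <cluster-name>-kubeconfig secret, then poll the workload cluster until a control-plane node is registered. A minimal sketch of that pattern (capz-example is a placeholder cluster name; the early "resource type" error seen above is expected while the workload API server is still starting):

# Extract the workload cluster kubeconfig from the CAPI-generated secret.
kubectl get secret capz-example-kubeconfig -o jsonpath='{.data.value}' | base64 -d > ./kubeconfig

# Poll until a control-plane node shows up in the workload cluster.
timeout --foreground 600 bash -c \
  'until kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane; do sleep 1; done'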
... skipping 100 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  3 04:57:05.762: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.595 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 04:57:09.071: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-knjh5" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  3 04:57:09.133: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 62.061609ms
Sep  3 04:57:11.196: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125381187s
Sep  3 04:57:13.270: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199623968s
Sep  3 04:57:15.334: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.263396258s
Sep  3 04:57:17.398: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32674313s
Sep  3 04:57:19.462: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390717792s
... skipping 3 lines ...
Sep  3 04:57:27.718: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.647069064s
Sep  3 04:57:29.781: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.710141891s
Sep  3 04:57:31.848: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.777324368s
Sep  3 04:57:33.916: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Pending", Reason="", readiness=false. Elapsed: 24.845328425s
Sep  3 04:57:35.983: INFO: Pod "azuredisk-volume-tester-knjh5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.911929156s
STEP: Saw pod success
Sep  3 04:57:35.983: INFO: Pod "azuredisk-volume-tester-knjh5" satisfied condition "Succeeded or Failed"
Sep  3 04:57:35.983: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-knjh5"
Sep  3 04:57:36.058: INFO: Pod azuredisk-volume-tester-knjh5 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-knjh5 in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 04:57:36.260: INFO: deleting PVC "azuredisk-1353"/"pvc-flmzh"
Sep  3 04:57:36.260: INFO: Deleting PersistentVolumeClaim "pvc-flmzh"
STEP: waiting for claim's PV "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" to be deleted
Sep  3 04:57:36.325: INFO: Waiting up to 10m0s for PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce to get deleted
Sep  3 04:57:36.387: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Released (61.728232ms)
Sep  3 04:57:41.453: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (5.127977553s)
Sep  3 04:57:46.518: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (10.192447429s)
Sep  3 04:57:51.584: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (15.258751882s)
Sep  3 04:57:56.651: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (20.325308703s)
Sep  3 04:58:01.718: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (25.392603493s)
Sep  3 04:58:06.782: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (30.456251792s)
Sep  3 04:58:11.846: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce found and phase=Failed (35.520691878s)
Sep  3 04:58:16.914: INFO: PersistentVolume pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce was removed
Sep  3 04:58:16.914: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  3 04:58:16.976: INFO: Claim "azuredisk-1353" in namespace "pvc-flmzh" doesn't exist in the system
Sep  3 04:58:16.976: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-nhdm5
Sep  3 04:58:17.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
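The dynamic-provisioning case above polls the tester pod's phase until it reports "Succeeded" (or the 15-minute budget runs out). Roughly the same check can be reproduced from a shell, reusing the pod and namespace names from this log:

# Poll the pod phase the way the e2e helper does, capped at 15 minutes.
timeout --foreground 900 bash -c '
  until [ "$(kubectl -n azuredisk-1353 get pod azuredisk-volume-tester-knjh5 -o jsonpath="{.status.phase}")" = "Succeeded" ]; do
    sleep 2
  done'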
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  3 04:58:42.390: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-2qldr"
Sep  3 04:58:42.455: INFO: Error getting logs for pod azuredisk-volume-tester-2qldr: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-2qldr)
STEP: Deleting pod azuredisk-volume-tester-2qldr in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 04:58:42.644: INFO: deleting PVC "azuredisk-1563"/"pvc-qcnl5"
Sep  3 04:58:42.644: INFO: Deleting PersistentVolumeClaim "pvc-qcnl5"
STEP: waiting for claim's PV "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" to be deleted
Sep  3 04:58:42.711: INFO: Waiting up to 10m0s for PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 to get deleted
Sep  3 04:58:42.773: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Bound (62.030372ms)
Sep  3 04:58:47.838: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Bound (5.127365845s)
Sep  3 04:58:52.903: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Bound (10.191527896s)
Sep  3 04:58:57.970: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (15.258869807s)
Sep  3 04:59:03.035: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (20.324334278s)
Sep  3 04:59:08.103: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (25.391918822s)
Sep  3 04:59:13.170: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (30.458870383s)
Sep  3 04:59:18.235: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (35.52348516s)
Sep  3 04:59:23.302: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (40.591270026s)
Sep  3 04:59:28.365: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 found and phase=Failed (45.654055483s)
Sep  3 04:59:33.430: INFO: PersistentVolume pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 was removed
Sep  3 04:59:33.430: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Sep  3 04:59:33.493: INFO: Claim "azuredisk-1563" in namespace "pvc-qcnl5" doesn't exist in the system
Sep  3 04:59:33.493: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-jf8xh
Sep  3 04:59:33.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 04:59:34.782: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-74ztw" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  3 04:59:34.844: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 62.517376ms
Sep  3 04:59:36.908: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126219143s
Sep  3 04:59:38.972: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190613848s
Sep  3 04:59:41.036: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254340351s
Sep  3 04:59:43.099: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.317605561s
Sep  3 04:59:45.164: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.38187481s
Sep  3 04:59:47.228: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.44564641s
Sep  3 04:59:49.291: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.509601842s
Sep  3 04:59:51.356: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.57451227s
Sep  3 04:59:53.420: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.6383224s
Sep  3 04:59:55.489: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706927559s
Sep  3 04:59:57.556: INFO: Pod "azuredisk-volume-tester-74ztw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.77387414s
STEP: Saw pod success
Sep  3 04:59:57.556: INFO: Pod "azuredisk-volume-tester-74ztw" satisfied condition "Succeeded or Failed"
Sep  3 04:59:57.556: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-74ztw"
Sep  3 04:59:57.631: INFO: Pod azuredisk-volume-tester-74ztw has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-74ztw in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 04:59:57.834: INFO: deleting PVC "azuredisk-7463"/"pvc-t5rhc"
Sep  3 04:59:57.834: INFO: Deleting PersistentVolumeClaim "pvc-t5rhc"
STEP: waiting for claim's PV "pvc-e881165f-aed6-4043-89c6-f4d286053e70" to be deleted
Sep  3 04:59:57.899: INFO: Waiting up to 10m0s for PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 to get deleted
Sep  3 04:59:57.972: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (72.979763ms)
Sep  3 05:00:03.037: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (5.137589445s)
Sep  3 05:00:08.101: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (10.201763771s)
Sep  3 05:00:13.168: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (15.268451913s)
Sep  3 05:00:18.235: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (20.335424421s)
Sep  3 05:00:23.302: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (25.402830056s)
Sep  3 05:00:28.369: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 found and phase=Failed (30.469924002s)
Sep  3 05:00:33.437: INFO: PersistentVolume pvc-e881165f-aed6-4043-89c6-f4d286053e70 was removed
Sep  3 05:00:33.437: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7463 to be removed
Sep  3 05:00:33.500: INFO: Claim "azuredisk-7463" in namespace "pvc-t5rhc" doesn't exist in the system
Sep  3 05:00:33.500: INFO: deleting StorageClass azuredisk-7463-kubernetes.io-azure-disk-dynamic-sc-2fhzn
Sep  3 05:00:33.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7463" for this suite.
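Each teardown above waits for the claim's PersistentVolume to disappear, logging its phase (Released or Failed) on every poll until the object is gone. A hedged command-line equivalent of that wait (the PV name below is a placeholder):

# Poll until the PV object is deleted; kubectl returns non-zero once it is NotFound.
timeout --foreground 600 bash -c '
  while kubectl get pv pvc-00000000-0000-0000-0000-000000000000 >/dev/null 2>&1; do
    sleep 5
  done'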
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  3 05:00:34.807: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-5cpv5" in namespace "azuredisk-9241" to be "Error status code"
Sep  3 05:00:34.876: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 68.869794ms
Sep  3 05:00:36.939: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132668782s
Sep  3 05:00:39.004: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.19697858s
Sep  3 05:00:41.067: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.260423978s
Sep  3 05:00:43.132: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.325250574s
Sep  3 05:00:45.197: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390482992s
Sep  3 05:00:47.261: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.454531151s
Sep  3 05:00:49.326: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.518734365s
Sep  3 05:00:51.391: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.58397779s
Sep  3 05:00:53.455: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.647830352s
Sep  3 05:00:55.521: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.714459613s
Sep  3 05:00:57.590: INFO: Pod "azuredisk-volume-tester-5cpv5": Phase="Failed", Reason="", readiness=false. Elapsed: 22.783189274s
STEP: Saw pod failure
Sep  3 05:00:57.590: INFO: Pod "azuredisk-volume-tester-5cpv5" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  3 05:00:57.654: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-5cpv5"
Sep  3 05:00:57.718: INFO: Pod azuredisk-volume-tester-5cpv5 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-5cpv5 in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 05:00:57.917: INFO: deleting PVC "azuredisk-9241"/"pvc-5tvxc"
Sep  3 05:00:57.917: INFO: Deleting PersistentVolumeClaim "pvc-5tvxc"
STEP: waiting for claim's PV "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" to be deleted
Sep  3 05:00:57.981: INFO: Waiting up to 10m0s for PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c to get deleted
Sep  3 05:00:58.043: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Released (62.442184ms)
Sep  3 05:01:03.110: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (5.12873385s)
Sep  3 05:01:08.174: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (10.193130267s)
Sep  3 05:01:13.238: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (15.257441747s)
Sep  3 05:01:18.305: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (20.324240728s)
Sep  3 05:01:23.370: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (25.38878872s)
Sep  3 05:01:28.435: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c found and phase=Failed (30.45398469s)
Sep  3 05:01:33.498: INFO: PersistentVolume pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c was removed
Sep  3 05:01:33.498: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  3 05:01:33.561: INFO: Claim "azuredisk-9241" in namespace "pvc-5tvxc" doesn't exist in the system
Sep  3 05:01:33.561: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-8v2j9
Sep  3 05:01:33.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9241" for this suite.
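This negative case mounts the volume read-only and expects the pod to exit with an error and log "touch: /mnt/test-1/data: Read-only file system". A quick manual equivalent of those two assertions (a sketch, reusing the names from this log):

# Confirm the tester container terminated with a non-zero exit code...
kubectl -n azuredisk-9241 get pod azuredisk-volume-tester-5cpv5 \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'
# ...and that its log carries the expected read-only error.
kubectl -n azuredisk-9241 logs azuredisk-volume-tester-5cpv5 | grep 'Read-only file system'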
... skipping 53 lines ...
Sep  3 05:02:39.058: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (5.129871539s)
Sep  3 05:02:44.123: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (10.194868758s)
Sep  3 05:02:49.190: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (15.261749955s)
Sep  3 05:02:54.258: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (20.330407051s)
Sep  3 05:02:59.325: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (25.39670788s)
Sep  3 05:03:04.389: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Bound (30.46121681s)
Sep  3 05:03:09.452: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (35.523906524s)
Sep  3 05:03:14.519: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (40.590748673s)
Sep  3 05:03:19.582: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (45.653937261s)
Sep  3 05:03:24.648: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (50.719923041s)
Sep  3 05:03:29.711: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (55.783003566s)
Sep  3 05:03:34.775: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (1m0.846924079s)
Sep  3 05:03:39.842: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (1m5.914277107s)
Sep  3 05:03:44.905: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 found and phase=Failed (1m10.977486992s)
Sep  3 05:03:49.969: INFO: PersistentVolume pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 was removed
Sep  3 05:03:49.969: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 05:03:50.034: INFO: Claim "azuredisk-9336" in namespace "pvc-9jllz" doesn't exist in the system
Sep  3 05:03:50.034: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-h7sdb
Sep  3 05:03:50.100: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-bjf4j"
Sep  3 05:03:50.168: INFO: Pod azuredisk-volume-tester-bjf4j has the following logs: 
... skipping 10 lines ...
Sep  3 05:04:05.682: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (15.259336007s)
Sep  3 05:04:10.749: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (20.32637213s)
Sep  3 05:04:15.816: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (25.392829039s)
Sep  3 05:04:20.878: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (30.455578837s)
Sep  3 05:04:25.945: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (35.521995133s)
Sep  3 05:04:31.009: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Bound (40.586165344s)
Sep  3 05:04:36.072: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Failed (45.648651037s)
Sep  3 05:04:41.138: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 found and phase=Failed (50.715563294s)
Sep  3 05:04:46.206: INFO: PersistentVolume pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817 was removed
Sep  3 05:04:46.207: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 05:04:46.270: INFO: Claim "azuredisk-9336" in namespace "pvc-bvk6q" doesn't exist in the system
Sep  3 05:04:46.270: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-wm6v8
Sep  3 05:04:46.334: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-2d82q"
Sep  3 05:04:46.409: INFO: Pod azuredisk-volume-tester-2d82q has the following logs: 
... skipping 9 lines ...
Sep  3 05:04:56.860: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (10.194738444s)
Sep  3 05:05:01.928: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (15.262143904s)
Sep  3 05:05:06.993: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (20.327430149s)
Sep  3 05:05:12.056: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (25.39093295s)
Sep  3 05:05:17.120: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (30.454194709s)
Sep  3 05:05:22.186: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Bound (35.520012631s)
Sep  3 05:05:27.248: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Failed (40.582746779s)
Sep  3 05:05:32.326: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Failed (45.660238165s)
Sep  3 05:05:37.388: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Failed (50.722675495s)
Sep  3 05:05:42.454: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 found and phase=Failed (55.788590888s)
Sep  3 05:05:47.521: INFO: PersistentVolume pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 was removed
Sep  3 05:05:47.521: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  3 05:05:47.583: INFO: Claim "azuredisk-9336" in namespace "pvc-znzmh" doesn't exist in the system
Sep  3 05:05:47.583: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-6qgqr
Sep  3 05:05:47.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9336" for this suite.
... skipping 62 lines ...
Sep  3 05:08:36.772: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (5.125208698s)
Sep  3 05:08:41.838: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (10.190587089s)
Sep  3 05:08:46.902: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (15.255135168s)
Sep  3 05:08:51.966: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (20.31845351s)
Sep  3 05:08:57.028: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (25.381175502s)
Sep  3 05:09:02.092: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Bound (30.444528874s)
Sep  3 05:09:07.159: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Failed (35.512116232s)
Sep  3 05:09:12.223: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Failed (40.576232306s)
Sep  3 05:09:17.290: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Failed (45.642538874s)
Sep  3 05:09:22.353: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Failed (50.705418597s)
Sep  3 05:09:27.419: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 found and phase=Failed (55.77227229s)
Sep  3 05:09:32.482: INFO: PersistentVolume pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 was removed
Sep  3 05:09:32.483: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2205 to be removed
Sep  3 05:09:32.544: INFO: Claim "azuredisk-2205" in namespace "pvc-r2l7t" doesn't exist in the system
Sep  3 05:09:32.544: INFO: deleting StorageClass azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-qxm9n
Sep  3 05:09:32.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2205" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 05:09:52.423: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2829j" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Sep  3 05:09:52.485: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 62.158053ms
Sep  3 05:09:54.547: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124531454s
Sep  3 05:09:56.614: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191168313s
Sep  3 05:09:58.681: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.258480474s
Sep  3 05:10:00.748: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.32549728s
Sep  3 05:10:02.816: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.392743595s
... skipping 9 lines ...
Sep  3 05:10:23.493: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 31.070594798s
Sep  3 05:10:25.561: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 33.13782411s
Sep  3 05:10:27.627: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 35.204227917s
Sep  3 05:10:29.693: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Pending", Reason="", readiness=false. Elapsed: 37.270546964s
Sep  3 05:10:31.760: INFO: Pod "azuredisk-volume-tester-2829j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.337455076s
STEP: Saw pod success
Sep  3 05:10:31.760: INFO: Pod "azuredisk-volume-tester-2829j" satisfied condition "Succeeded or Failed"
Sep  3 05:10:31.760: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-2829j"
Sep  3 05:10:31.837: INFO: Pod azuredisk-volume-tester-2829j has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-2829j in namespace azuredisk-1387
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 05:10:32.032: INFO: deleting PVC "azuredisk-1387"/"pvc-jvnph"
Sep  3 05:10:32.032: INFO: Deleting PersistentVolumeClaim "pvc-jvnph"
STEP: waiting for claim's PV "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" to be deleted
Sep  3 05:10:32.096: INFO: Waiting up to 10m0s for PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b to get deleted
Sep  3 05:10:32.161: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (65.008248ms)
Sep  3 05:10:37.228: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (5.132575006s)
Sep  3 05:10:42.292: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (10.196071034s)
Sep  3 05:10:47.357: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (15.261455581s)
Sep  3 05:10:52.422: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (20.325723724s)
Sep  3 05:10:57.488: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (25.392536706s)
Sep  3 05:11:02.552: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (30.455881637s)
Sep  3 05:11:07.618: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (35.521967873s)
Sep  3 05:11:12.685: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (40.589245285s)
Sep  3 05:11:17.749: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (45.652745389s)
Sep  3 05:11:22.813: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (50.71678185s)
Sep  3 05:11:27.876: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (55.780140492s)
Sep  3 05:11:32.940: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (1m0.843945818s)
Sep  3 05:11:38.004: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (1m5.907826083s)
Sep  3 05:11:43.069: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b found and phase=Failed (1m10.973188422s)
Sep  3 05:11:48.136: INFO: PersistentVolume pvc-0075905b-eb04-48f3-8bce-d66e04d8702b was removed
Sep  3 05:11:48.136: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  3 05:11:48.198: INFO: Claim "azuredisk-1387" in namespace "pvc-jvnph" doesn't exist in the system
Sep  3 05:11:48.198: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-lb9w6
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 05:12:10.584: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-jllnh" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Sep  3 05:12:10.649: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 64.950325ms
Sep  3 05:12:12.715: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131276295s
Sep  3 05:12:14.785: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200455248s
Sep  3 05:12:16.854: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.270005069s
Sep  3 05:12:18.924: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340085026s
Sep  3 05:12:20.994: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.409900524s
... skipping 9 lines ...
Sep  3 05:12:41.702: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 31.117676186s
Sep  3 05:12:43.771: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 33.186682646s
Sep  3 05:12:45.840: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 35.255547146s
Sep  3 05:12:47.910: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.325568251s
Sep  3 05:12:49.979: INFO: Pod "azuredisk-volume-tester-jllnh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 39.394818729s
STEP: Saw pod success
Sep  3 05:12:49.979: INFO: Pod "azuredisk-volume-tester-jllnh" satisfied condition "Succeeded or Failed"
Sep  3 05:12:49.979: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-jllnh"
Sep  3 05:12:50.066: INFO: Pod azuredisk-volume-tester-jllnh has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.051281 seconds, 1.9GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  3 05:12:50.323: INFO: deleting PVC "azuredisk-4547"/"pvc-2pss4"
Sep  3 05:12:50.323: INFO: Deleting PersistentVolumeClaim "pvc-2pss4"
STEP: waiting for claim's PV "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" to be deleted
Sep  3 05:12:50.390: INFO: Waiting up to 10m0s for PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 to get deleted
Sep  3 05:12:50.455: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 found and phase=Released (64.992779ms)
Sep  3 05:12:55.521: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 found and phase=Failed (5.131388216s)
Sep  3 05:13:00.584: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 found and phase=Failed (10.194327191s)
Sep  3 05:13:05.651: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 found and phase=Failed (15.260549232s)
Sep  3 05:13:10.717: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 found and phase=Failed (20.327487205s)
Sep  3 05:13:15.780: INFO: PersistentVolume pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841 was removed
Sep  3 05:13:15.780: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  3 05:13:15.846: INFO: Claim "azuredisk-4547" in namespace "pvc-2pss4" doesn't exist in the system
Sep  3 05:13:15.846: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-lqmnz
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 05:13:16.039: INFO: deleting PVC "azuredisk-4547"/"pvc-8gx84"
Sep  3 05:13:16.039: INFO: Deleting PersistentVolumeClaim "pvc-8gx84"
STEP: waiting for claim's PV "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" to be deleted
Sep  3 05:13:16.102: INFO: Waiting up to 10m0s for PersistentVolume pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 to get deleted
Sep  3 05:13:16.163: INFO: PersistentVolume pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 found and phase=Failed (61.74699ms)
Sep  3 05:13:21.228: INFO: PersistentVolume pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 found and phase=Failed (5.126568936s)
Sep  3 05:13:26.295: INFO: PersistentVolume pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 found and phase=Failed (10.193406916s)
Sep  3 05:13:31.357: INFO: PersistentVolume pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 was removed
Sep  3 05:13:31.357: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  3 05:13:31.419: INFO: Claim "azuredisk-4547" in namespace "pvc-8gx84" doesn't exist in the system
Sep  3 05:13:31.419: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-5t2d7
Sep  3 05:13:31.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-4547" for this suite.
... skipping 85 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  3 05:13:34.690: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rkp8n" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Sep  3 05:13:34.752: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 61.627568ms
Sep  3 05:13:36.814: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123850528s
Sep  3 05:13:38.881: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190164476s
Sep  3 05:13:40.947: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.256966196s
Sep  3 05:13:43.014: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.323470064s
Sep  3 05:13:45.081: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390130222s
... skipping 8 lines ...
Sep  3 05:14:03.682: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 28.991918027s
Sep  3 05:14:05.750: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 31.059681495s
Sep  3 05:14:07.817: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 33.126922018s
Sep  3 05:14:09.885: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Pending", Reason="", readiness=false. Elapsed: 35.194329253s
Sep  3 05:14:11.952: INFO: Pod "azuredisk-volume-tester-rkp8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 37.261332579s
STEP: Saw pod success
Sep  3 05:14:11.952: INFO: Pod "azuredisk-volume-tester-rkp8n" satisfied condition "Succeeded or Failed"
Sep  3 05:14:11.952: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-rkp8n"
Sep  3 05:14:12.041: INFO: Pod azuredisk-volume-tester-rkp8n has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-rkp8n in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
Sep  3 05:14:12.240: INFO: deleting PVC "azuredisk-7578"/"pvc-rbkbw"
Sep  3 05:14:12.240: INFO: Deleting PersistentVolumeClaim "pvc-rbkbw"
STEP: waiting for claim's PV "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" to be deleted
Sep  3 05:14:12.303: INFO: Waiting up to 10m0s for PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a to get deleted
Sep  3 05:14:12.365: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (61.62117ms)
Sep  3 05:14:17.432: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (5.128464025s)
Sep  3 05:14:22.496: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (10.193081778s)
Sep  3 05:14:27.560: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (15.25692023s)
Sep  3 05:14:32.626: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (20.322777958s)
Sep  3 05:14:37.698: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (25.394877398s)
Sep  3 05:14:42.763: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (30.459866065s)
Sep  3 05:14:47.829: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (35.525454107s)
Sep  3 05:14:52.895: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (40.592217997s)
Sep  3 05:14:57.962: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (45.658665912s)
Sep  3 05:15:03.025: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (50.722258393s)
Sep  3 05:15:08.089: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (55.786400365s)
Sep  3 05:15:13.153: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a found and phase=Failed (1m0.849776403s)
Sep  3 05:15:18.216: INFO: PersistentVolume pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a was removed
Sep  3 05:15:18.216: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  3 05:15:18.278: INFO: Claim "azuredisk-7578" in namespace "pvc-rbkbw" doesn't exist in the system
Sep  3 05:15:18.278: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-hltgb
STEP: validating provisioned PV
STEP: checking the PV
... skipping 512 lines ...
I0903 04:52:58.850098       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-03 04:46:03 +0000 UTC to 2032-08-31 04:51:03 +0000 UTC (now=2022-09-03 04:52:58.850082059 +0000 UTC))
I0903 04:52:58.850479       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662180778" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662180777" (2022-09-03 03:52:57 +0000 UTC to 2023-09-03 03:52:57 +0000 UTC (now=2022-09-03 04:52:58.850462039 +0000 UTC))
I0903 04:52:58.850898       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662180778" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662180778" (2022-09-03 03:52:58 +0000 UTC to 2023-09-03 03:52:58 +0000 UTC (now=2022-09-03 04:52:58.850880017 +0000 UTC))
I0903 04:52:58.851085       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0903 04:52:58.851247       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0903 04:52:58.851984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0903 04:53:01.306196       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0903 04:53:01.306237       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0903 04:53:04.903411       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0903 04:53:04.903476       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-m0dmyx-control-plane-csq9p_aa5bb81d-798d-4f11-99d9-6a6edc392aac became leader"
I0903 04:53:05.023867       1 request.go:600] Waited for 95.707965ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1?timeout=32s
I0903 04:53:05.074130       1 request.go:600] Waited for 145.951437ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0903 04:53:05.123813       1 request.go:600] Waited for 195.618341ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1?timeout=32s
I0903 04:53:05.174223       1 request.go:600] Waited for 246.021434ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1beta1?timeout=32s
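The kube-controller-manager lines above show leader election settling: the first attempt is rejected while RBAC for the Lease object is not yet in place, then the kube-system/kube-controller-manager lease is acquired. The current holder can be inspected directly if needed (a sketch, assuming the same ./kubeconfig used earlier in this log):

# Show which controller-manager instance currently holds the leader-election lease.
kubectl --kubeconfig=./kubeconfig -n kube-system get lease kube-controller-manager \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'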
... skipping 39 lines ...
I0903 04:53:05.479969       1 reflector.go:219] Starting reflector *v1.Node (22h45m2.519721986s) from k8s.io/client-go/informers/factory.go:134
I0903 04:53:05.479989       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0903 04:53:05.480200       1 reflector.go:219] Starting reflector *v1.ServiceAccount (22h45m2.519721986s) from k8s.io/client-go/informers/factory.go:134
I0903 04:53:05.480561       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0903 04:53:05.480553       1 reflector.go:219] Starting reflector *v1.Secret (22h45m2.519721986s) from k8s.io/client-go/informers/factory.go:134
I0903 04:53:05.480809       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0903 04:53:05.518375       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0903 04:53:05.518561       1 controllermanager.go:559] Starting "pv-protection"
I0903 04:53:05.538824       1 controllermanager.go:574] Started "pv-protection"
I0903 04:53:05.538849       1 controllermanager.go:559] Starting "ephemeral-volume"
I0903 04:53:05.539042       1 pv_protection_controller.go:83] Starting PV protection controller
I0903 04:53:05.539060       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I0903 04:53:05.555374       1 controllermanager.go:574] Started "ephemeral-volume"
... skipping 50 lines ...
I0903 04:53:05.681936       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0903 04:53:05.681945       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0903 04:53:05.681965       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0903 04:53:05.681982       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0903 04:53:05.681991       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0903 04:53:05.682001       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0903 04:53:05.682032       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 04:53:05.682039       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0903 04:53:05.682096       1 controllermanager.go:574] Started "persistentvolume-binder"
I0903 04:53:05.682107       1 controllermanager.go:559] Starting "resourcequota"
I0903 04:53:05.682222       1 pv_controller_base.go:308] Starting persistent volume controller
I0903 04:53:05.682233       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0903 04:53:05.770538       1 resource_quota_monitor.go:177] QuotaMonitor using a shared informer for resource "/v1, Resource=pods"
... skipping 135 lines ...
I0903 04:53:08.382478       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0903 04:53:08.382497       1 graph_builder.go:273] garbage controller monitor not synced: no monitors
I0903 04:53:08.382541       1 graph_builder.go:289] GraphBuilder running
I0903 04:53:08.479869       1 request.go:600] Waited for 97.092235ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector
I0903 04:53:08.528966       1 request.go:600] Waited for 97.512512ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system
I0903 04:53:08.579090       1 request.go:600] Waited for 96.963642ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector/token
W0903 04:53:08.606079       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0903 04:53:08.606355       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events extensions/v1beta1, Resource=ingresses flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []
I0903 04:53:08.606374       1 garbagecollector.go:219] reset restmapper
I0903 04:53:08.629575       1 request.go:600] Waited for 98.528455ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts
I0903 04:53:08.632536       1 controllermanager.go:574] Started "ttl"
I0903 04:53:08.632558       1 controllermanager.go:559] Starting "cloud-node-lifecycle"
I0903 04:53:08.632700       1 ttl_controller.go:121] Starting TTL controller
... skipping 18 lines ...
I0903 04:53:08.932803       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0903 04:53:08.932814       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0903 04:53:08.932824       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0903 04:53:08.932839       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0903 04:53:08.932850       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0903 04:53:08.932864       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0903 04:53:08.932889       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0903 04:53:08.932900       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0903 04:53:08.933010       1 controllermanager.go:574] Started "attachdetach"
I0903 04:53:08.933027       1 controllermanager.go:559] Starting "replicationcontroller"
I0903 04:53:08.933065       1 attach_detach_controller.go:328] Starting attach detach controller
I0903 04:53:08.933078       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0903 04:53:08.933122       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-control-plane-csq9p"
W0903 04:53:08.933155       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m0dmyx-control-plane-csq9p" does not exist
I0903 04:53:09.083260       1 controllermanager.go:574] Started "replicationcontroller"
I0903 04:53:09.083751       1 controllermanager.go:559] Starting "namespace"
I0903 04:53:09.083713       1 replica_set.go:182] Starting replicationcontroller controller
I0903 04:53:09.084313       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0903 04:53:09.240455       1 azure_backoff.go:109] VirtualMachinesClient.List(capz-m0dmyx) success
E0903 04:53:09.346266       1 namespaced_resources_deleter.go:161] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
... skipping 420 lines ...
I0903 04:53:10.659083       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0903 04:53:10.659995       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 04:53:10.644111514 +0000 UTC m=+13.550248918 - now: 2022-09-03 04:53:10.659987587 +0000 UTC m=+13.566125091]
I0903 04:53:10.660311       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-8c95fb79b to 1"
I0903 04:53:10.664259       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-03 04:53:10.65994539 +0000 UTC m=+13.566082794 - now: 2022-09-03 04:53:10.664254138 +0000 UTC m=+13.570391542]
I0903 04:53:10.664442       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0903 04:53:10.674672       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="635.744183ms"
I0903 04:53:10.674719       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0903 04:53:10.674758       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 04:53:10.674742926 +0000 UTC m=+13.580880330"
I0903 04:53:10.675518       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 04:53:10 +0000 UTC - now: 2022-09-03 04:53:10.675511881 +0000 UTC m=+13.581649285]
I0903 04:53:10.676008       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="636.343847ms"
I0903 04:53:10.676035       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0903 04:53:10.676288       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-03 04:53:10.676267337 +0000 UTC m=+13.582404841"
I0903 04:53:10.677008       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-03 04:53:10 +0000 UTC - now: 2022-09-03 04:53:10.677002094 +0000 UTC m=+13.583139498]
I0903 04:53:10.680304       1 request.go:600] Waited for 349.090519ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpoint-controller/token
I0903 04:53:10.680757       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0903 04:53:10.681858       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="5.580174ms"
I0903 04:53:10.681900       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-03 04:53:10.681884209 +0000 UTC m=+13.588021713"
... skipping 9 lines ...
I0903 04:53:10.684036       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="1.415317ms"
I0903 04:53:10.697289       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="22.531184ms"
I0903 04:53:10.697389       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 04:53:10.697373305 +0000 UTC m=+13.603510709"
I0903 04:53:10.698036       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 04:53:10 +0000 UTC - now: 2022-09-03 04:53:10.698030366 +0000 UTC m=+13.604167870]
I0903 04:53:10.698479       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0903 04:53:10.701957       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="4.570733ms"
I0903 04:53:10.702040       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0903 04:53:10.702117       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 04:53:10.702098129 +0000 UTC m=+13.608235533"
I0903 04:53:10.702742       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-03 04:53:10 +0000 UTC - now: 2022-09-03 04:53:10.702735392 +0000 UTC m=+13.608872896]
I0903 04:53:10.702926       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0903 04:53:10.703002       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="879.348µs"
I0903 04:53:10.705347       1 endpoints_controller.go:381] Finished syncing service "kube-system/kube-dns" endpoints. (640.196622ms)
I0903 04:53:10.705350       1 endpointslicemirroring_controller.go:273] syncEndpoints("kube-system/kube-dns")
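The "Error syncing deployment ... the object has been modified" messages at 04:53:10 above are ordinary optimistic-concurrency conflicts: the controller wrote a status update against a stale resourceVersion and simply re-queued the sync, which succeeds moments later. A minimal sketch of how a client-go caller typically absorbs the same conflict class, assuming a hypothetical helper name and an already-constructed clientset (illustrative only, not code from this job):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleWithConflictRetry updates a Deployment's replica count, re-reading the
// object and retrying whenever the API server returns a 409 Conflict
// ("the object has been modified; please apply your changes to the latest
// version and try again"), the same error class the deployment controller
// resolves above by re-queueing the sync.
func scaleWithConflictRetry(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Work from the latest copy so the update carries a fresh resourceVersion.
		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
		return err // a Conflict here triggers another Get+Update attempt
	})
}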
... skipping 289 lines ...
I0903 04:53:19.548506       1 replica_set.go:439] Pod calico-kube-controllers-969cf87c4-cmj7w updated, objectMeta {Name:calico-kube-controllers-969cf87c4-cmj7w GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:21742550-941d-4ad5-9d72-fb872ecff55d ResourceVersion:574 Generation:0 CreationTimestamp:2022-09-03 04:53:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:26ab1775-badd-426f-802b-d97317069952 Controller:0xc0013ce5d0 BlockOwnerDeletion:0xc0013ce5d1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 04:53:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26ab1775-badd-426f-802b-d97317069952\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}}]} -> {Name:calico-kube-controllers-969cf87c4-cmj7w GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:21742550-941d-4ad5-9d72-fb872ecff55d ResourceVersion:577 Generation:0 CreationTimestamp:2022-09-03 04:53:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:26ab1775-badd-426f-802b-d97317069952 Controller:0xc0016e94a7 BlockOwnerDeletion:0xc0016e94a8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 04:53:19 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"26ab1775-badd-426f-802b-d97317069952\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-03 04:53:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]}.
I0903 04:53:19.548907       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-cmj7w" podUID=21742550-941d-4ad5-9d72-fb872ecff55d
I0903 04:53:19.549070       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-cmj7w"
I0903 04:53:19.549224       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-cmj7w, PodDisruptionBudget controller will avoid syncing.
I0903 04:53:19.549363       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-969cf87c4-cmj7w"
I0903 04:53:19.550524       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="38.670497ms"
I0903 04:53:19.550703       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0903 04:53:19.550899       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 04:53:19.55087727 +0000 UTC m=+22.457014674"
I0903 04:53:19.551594       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 04:53:19 +0000 UTC - now: 2022-09-03 04:53:19.551586759 +0000 UTC m=+22.457724263]
I0903 04:53:19.565582       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="14.685672ms"
I0903 04:53:19.566342       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0903 04:53:19.566515       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-03 04:53:19.566249731 +0000 UTC m=+22.472387235"
I0903 04:53:19.567199       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-03 04:53:19 +0000 UTC - now: 2022-09-03 04:53:19.567178016 +0000 UTC m=+22.473315420]
... skipping 218 lines ...
I0903 04:53:40.275509       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (13h6m43.892428418s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0903 04:53:40.275530       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0903 04:53:40.375144       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0903 04:53:40.474410       1 shared_informer.go:270] caches populated
I0903 04:53:40.474437       1 shared_informer.go:247] Caches are synced for resource quota 
I0903 04:53:40.474446       1 resource_quota_controller.go:454] synced quota controller
W0903 04:53:40.781357       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0903 04:53:40.781625       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0903 04:53:40.781643       1 garbagecollector.go:219] reset restmapper
E0903 04:53:40.789180       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0903 04:53:40.807554       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0903 04:53:40.808512       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=bgpconfigurations", kind "crd.projectcalico.org/v1, Kind=BGPConfiguration"
I0903 04:53:40.808580       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=blockaffinities", kind "crd.projectcalico.org/v1, Kind=BlockAffinity"
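The repeated "metrics.k8s.io/v1beta1: the server is currently unable to handle the request" discovery failures above occur while the aggregated metrics-server API is not yet Ready during cluster bring-up; the resource-quota and garbage-collector controllers retry discovery and recover on their own. A rough sketch of probing that group's availability with a discovery client, assuming a hypothetical helper name (illustrative only):

package example

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// metricsAPIAvailable reports whether the aggregated metrics.k8s.io/v1beta1
// group is currently being served; until metrics-server is Ready the same
// discovery call fails with "the server is currently unable to handle the
// request", as seen in the memcache.go and garbagecollector.go lines above.
func metricsAPIAvailable(cfg *rest.Config) (bool, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return false, err
	}
	if _, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1"); err != nil {
		// Treat a discovery failure as "not available yet"; callers can poll.
		return false, nil
	}
	return true, nil
}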
... skipping 81 lines ...
I0903 04:53:44.648826       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcd55626ab3a3d, ext:47554893277, loc:(*time.Location)(0x731ea80)}}
I0903 04:53:44.648853       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcd55626acb0df, ext:47554989083, loc:(*time.Location)(0x731ea80)}}
I0903 04:53:44.648861       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0903 04:53:44.648896       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0903 04:53:44.648908       1 daemon_controller.go:1103] Updating daemon set status
I0903 04:53:44.648934       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (1.174472ms)
I0903 04:53:45.072890       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m0dmyx-control-plane-csq9p transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 04:53:21 +0000 UTC,LastTransitionTime:2022-09-03 04:52:46 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 04:53:41 +0000 UTC,LastTransitionTime:2022-09-03 04:53:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 04:53:45.072983       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-control-plane-csq9p ReadyCondition updated. Updating timestamp.
I0903 04:53:45.073017       1 node_lifecycle_controller.go:893] Node capz-m0dmyx-control-plane-csq9p is healthy again, removing all taints
I0903 04:53:45.073144       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0903 04:53:46.297070       1 disruption.go:427] updatePod called on pod "calico-node-c9gqb"
I0903 04:53:46.297148       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-c9gqb, PodDisruptionBudget controller will avoid syncing.
I0903 04:53:46.297160       1 disruption.go:430] No matching pdb for pod "calico-node-c9gqb"
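Above, node_lifecycle_controller sees the control-plane node's Ready condition flip from False (cni plugin not initialized) to True once Calico is running, and then removes the not-ready taints. Test harnesses commonly wait for that same transition; a minimal polling sketch with client-go, assuming a hypothetical helper name (illustrative only):

package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls until the node's Ready condition is True, the same
// transition the node_lifecycle_controller records above before removing the
// node.kubernetes.io/not-ready taint.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}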
... skipping 214 lines ...
I0903 04:54:10.046808       1 gc_controller.go:161] GC'ing orphaned
I0903 04:54:10.046838       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 04:54:10.091045       1 pv_controller_base.go:528] resyncing PV controller
E0903 04:54:10.511066       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0903 04:54:10.511138       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 04:54:11.660301       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-control-plane-csq9p"
W0903 04:54:11.788093       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0903 04:54:15.078772       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-control-plane-csq9p ReadyCondition updated. Updating timestamp.
I0903 04:54:17.779499       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="72.808µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38826" resp=200
I0903 04:54:19.987505       1 endpoints_controller.go:555] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
I0903 04:54:19.987836       1 replica_set.go:439] Pod metrics-server-8c95fb79b-lspn8 updated, objectMeta {Name:metrics-server-8c95fb79b-lspn8 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:5a4a2f98-e0a5-41c8-bced-c2f04b633fd0 ResourceVersion:753 Generation:0 CreationTimestamp:2022-09-03 04:53:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:d7e076101cd6ee4a21a5e58e82db15bcb2abc67110913593660443d2471e2e1d cni.projectcalico.org/podIP:192.168.85.65/32 cni.projectcalico.org/podIPs:192.168.85.65/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:a733b0f6-9bcf-403c-b302-659b42f1a722 Controller:0xc001728907 BlockOwnerDeletion:0xc001728908}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 04:53:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a733b0f6-9bcf-403c-b302-659b42f1a722\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-03 04:53:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-03 04:53:50 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 04:53:55 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.85.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]} -> {Name:metrics-server-8c95fb79b-lspn8 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:5a4a2f98-e0a5-41c8-bced-c2f04b633fd0 ResourceVersion:795 Generation:0 CreationTimestamp:2022-09-03 04:53:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:d7e076101cd6ee4a21a5e58e82db15bcb2abc67110913593660443d2471e2e1d cni.projectcalico.org/podIP:192.168.85.65/32 cni.projectcalico.org/podIPs:192.168.85.65/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:a733b0f6-9bcf-403c-b302-659b42f1a722 Controller:0xc0022ab8b7 BlockOwnerDeletion:0xc0022ab8b8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 04:53:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a733b0f6-9bcf-403c-b302-659b42f1a722\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-03 04:53:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-03 04:53:50 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 04:54:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.85.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]}.
I0903 04:54:19.988025       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bcd54da732471a, ext:13563743930, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:19.988130       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-8c95fb79b, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->1, availableReplicas 0->1, sequence No: 1->1
... skipping 99 lines ...
I0903 04:54:51.835921       1 controller.go:272] Triggering nodeSync
I0903 04:54:51.835928       1 controller.go:291] nodeSync has been triggered
I0903 04:54:51.835936       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 04:54:51.835944       1 controller.go:790] Finished updateLoadBalancerHosts
I0903 04:54:51.835950       1 controller.go:731] It took 1.65e-05 seconds to finish nodeSyncInternal
I0903 04:54:51.835973       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
W0903 04:54:51.835990       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m0dmyx-md-0-pj7kz" does not exist
I0903 04:54:51.836895       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcd54f18c39d17, ext:19321610323, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:51.838907       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcd566f2009710, ext:114745036876, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:51.838953       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-m0dmyx-md-0-pj7kz], creating 1
I0903 04:54:51.838692       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcd556925aef55, ext:49214086801, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:51.839318       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcd566f206a9c7, ext:114745434983, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:51.839456       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-m0dmyx-md-0-pj7kz], creating 1
... skipping 82 lines ...
I0903 04:54:52.729881       1 controller.go:693] Ignoring node capz-m0dmyx-md-0-4f2wx with Ready condition status False
I0903 04:54:52.729904       1 controller.go:272] Triggering nodeSync
I0903 04:54:52.729913       1 controller.go:291] nodeSync has been triggered
I0903 04:54:52.729920       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 04:54:52.729928       1 controller.go:790] Finished updateLoadBalancerHosts
I0903 04:54:52.729934       1 controller.go:731] It took 1.51e-05 seconds to finish nodeSyncInternal
W0903 04:54:52.729967       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m0dmyx-md-0-4f2wx" does not exist
I0903 04:54:52.730055       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcd5672b83b43a, ext:115636189146, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:52.730081       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-m0dmyx-md-0-4f2wx], creating 1
I0903 04:54:52.730430       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcd5672b804d83, ext:115635966243, loc:(*time.Location)(0x731ea80)}}
I0903 04:54:52.730455       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-m0dmyx-md-0-4f2wx], creating 1
I0903 04:54:52.739728       1 controller_utils.go:591] Controller calico-node created pod calico-node-7rwnn
I0903 04:54:52.739760       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
... skipping 496 lines ...
I0903 04:55:22.806799       1 controller.go:748] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0903 04:55:22.806930       1 controller.go:731] It took 0.00073931 seconds to finish nodeSyncInternal
I0903 04:55:22.807093       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 04:55:22.815399       1 controller_utils.go:221] Made sure that Node capz-m0dmyx-md-0-4f2wx has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0903 04:55:22.815889       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 04:55:24.985420       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:55:25.089792       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m0dmyx-md-0-pj7kz transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 04:55:01 +0000 UTC,LastTransitionTime:2022-09-03 04:54:51 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 04:55:21 +0000 UTC,LastTransitionTime:2022-09-03 04:55:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 04:55:25.089884       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-pj7kz ReadyCondition updated. Updating timestamp.
I0903 04:55:25.093475       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:55:25.101046       1 node_lifecycle_controller.go:893] Node capz-m0dmyx-md-0-pj7kz is healthy again, removing all taints
I0903 04:55:25.101342       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m0dmyx-md-0-4f2wx transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-03 04:55:02 +0000 UTC,LastTransitionTime:2022-09-03 04:54:52 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-03 04:55:22 +0000 UTC,LastTransitionTime:2022-09-03 04:55:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0903 04:55:25.102248       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-4f2wx ReadyCondition updated. Updating timestamp.
I0903 04:55:25.103293       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m0dmyx-md-0-pj7kz}
I0903 04:55:25.103322       1 taint_manager.go:440] "Updating known taints on node" node="capz-m0dmyx-md-0-pj7kz" taints=[]
I0903 04:55:25.103358       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-m0dmyx-md-0-pj7kz"
I0903 04:55:25.103514       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 04:55:25.118860       1 node_lifecycle_controller.go:893] Node capz-m0dmyx-md-0-4f2wx is healthy again, removing all taints
... skipping 124 lines ...
I0903 04:57:09.050744       1 pv_controller.go:1764] operation "provision-azuredisk-1353/pvc-flmzh[11d2a9d2-4f2c-448f-9fba-d3df229fbbce]" is already running, skipping
I0903 04:57:09.050774       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-flmzh"
I0903 04:57:09.051008       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-flmzh" with version 1227
I0903 04:57:09.053351       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce StorageAccountType:Standard_LRS Size:10
I0903 04:57:09.544509       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0903 04:57:09.568918       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-sxp5l, uid a34deb73-76f8-4539-98cb-ec4196c5a5a9, event type delete
E0903 04:57:09.602900       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-ljgsj" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0903 04:57:09.622776       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid 7ce31138-1287-40f3-b73c-1428b5847fd8, event type delete
I0903 04:57:09.624829       1 publisher.go:181] Finished syncing namespace "azuredisk-8081" (1.785825ms)
I0903 04:57:09.670391       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0903 04:57:09.670680       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid 6b2e75ff-2da7-46e1-8b94-3f14b685b9be, event type delete
I0903 04:57:09.670747       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.6µs)
I0903 04:57:09.716439       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.9µs)
... skipping 11 lines ...
I0903 04:57:10.098487       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-1353/pvc-flmzh[11d2a9d2-4f2c-448f-9fba-d3df229fbbce]]
I0903 04:57:10.098496       1 pv_controller.go:1764] operation "provision-azuredisk-1353/pvc-flmzh[11d2a9d2-4f2c-448f-9fba-d3df229fbbce]" is already running, skipping
I0903 04:57:10.148813       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0903 04:57:10.180532       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid dbbea915-d56d-47bc-a7b7-ba39ab92155e, event type delete
I0903 04:57:10.182867       1 publisher.go:181] Finished syncing namespace "azuredisk-2540" (2.202932ms)
I0903 04:57:10.224847       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-t6gmf, uid 8ba8300f-0e1c-4626-828c-7f8806d1929a, event type delete
E0903 04:57:10.239881       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-5ldwd" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0903 04:57:10.273512       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid 972aa1b6-8117-45a7-b2ac-d4e61c7d476a, event type delete
I0903 04:57:10.274342       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3µs)
I0903 04:57:10.274765       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0903 04:57:10.301842       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0903 04:57:10.301883       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.7µs)
I0903 04:57:10.321917       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (176.785815ms)
... skipping 84 lines ...
I0903 04:57:12.414197       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1353/pvc-flmzh] status: phase Bound already set
I0903 04:57:12.414207       1 pv_controller.go:1038] volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" bound to claim "azuredisk-1353/pvc-flmzh"
I0903 04:57:12.414222       1 pv_controller.go:1039] volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" status after binding: phase: Bound, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:57:12.414235       1 pv_controller.go:1040] claim "azuredisk-1353/pvc-flmzh" status after binding: phase: Bound, bound to: "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce", bindCompleted: true, boundByController: true
I0903 04:57:12.506211       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0903 04:57:12.522585       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-qjxsv, uid 60d231e6-ed16-4557-a9ab-54ebb532c710, event type delete
E0903 04:57:12.536051       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-shnkd" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0903 04:57:12.541649       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0903 04:57:12.541728       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid f65d8a1e-cc54-4be8-8e5c-328d041d17f4, event type delete
I0903 04:57:12.541765       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (2µs)
I0903 04:57:12.560817       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid 4e6920fc-9a72-483d-983c-b0ff2ccde131, event type delete
I0903 04:57:12.563260       1 publisher.go:181] Finished syncing namespace "azuredisk-5356" (2.399735ms)
I0903 04:57:12.673209       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5356, estimate: 0, errors: <nil>
... skipping 8 lines ...
I0903 04:57:13.063150       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-knjh5"
I0903 04:57:13.090632       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5194
I0903 04:57:13.106126       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce") from node "capz-m0dmyx-md-0-4f2wx" 
I0903 04:57:13.117335       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid dcccb925-2a59-4454-88b5-5d4f63630ff0, event type delete
I0903 04:57:13.122567       1 publisher.go:181] Finished syncing namespace "azuredisk-5194" (4.835269ms)
I0903 04:57:13.137779       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-whvlx, uid 2c7a7351-6a8b-46d1-830d-3686868d03d9, event type delete
E0903 04:57:13.153166       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-z4lzf" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0903 04:57:13.180129       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" to node "capz-m0dmyx-md-0-4f2wx".
I0903 04:57:13.196968       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0903 04:57:13.197097       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid fd43be6a-cf78-4897-8f26-e2a4722b9119, event type delete
I0903 04:57:13.197168       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (1.8µs)
I0903 04:57:13.221456       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" lun 0 to node "capz-m0dmyx-md-0-4f2wx".
I0903 04:57:13.221516       1 azure_controller_standard.go:93] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - attach disk(capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) with DiskEncryptionSetID()
... skipping 130 lines ...
I0903 04:57:36.325494       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:57:36.325502       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:57:36.325515       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:57:36.325522       1 pv_controller.go:1764] operation "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" is already running, skipping
I0903 04:57:36.327888       1 pv_controller.go:1341] isVolumeReleased[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is released
I0903 04:57:36.327904       1 pv_controller.go:1405] doDeleteVolume [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]
I0903 04:57:36.363464       1 pv_controller.go:1260] deletion of volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 04:57:36.363484       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: set phase Failed
I0903 04:57:36.363492       1 pv_controller.go:858] updating PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: set phase Failed
I0903 04:57:36.366774       1 pv_protection_controller.go:205] Got event on PV pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce
I0903 04:57:36.366774       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1334
I0903 04:57:36.367000       1 pv_controller.go:879] volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" entered phase "Failed"
I0903 04:57:36.367013       1 pv_controller.go:901] volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
E0903 04:57:36.367072       1 goroutinemap.go:150] Operation for "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" failed. No retries permitted until 2022-09-03 04:57:36.86704413 +0000 UTC m=+279.773181634 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:57:36.366796       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1334
I0903 04:57:36.367383       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase: Failed, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:57:36.367493       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is bound to claim azuredisk-1353/pvc-flmzh
I0903 04:57:36.367629       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:57:36.367712       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:57:36.367810       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:57:36.367890       1 pv_controller.go:1766] operation "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" postponed due to exponential backoff
I0903 04:57:36.368010       1 event.go:291] "Event occurred" object="pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:57:37.779882       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="97.903µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59002" resp=200
I0903 04:57:39.957221       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:57:39.997736       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:57:40.099858       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:57:40.099978       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1334
I0903 04:57:40.100050       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase: Failed, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:57:40.100123       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is bound to claim azuredisk-1353/pvc-flmzh
I0903 04:57:40.100171       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:57:40.100191       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:57:40.100228       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:57:40.100323       1 pv_controller.go:1232] deleteVolumeOperation [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce] started
I0903 04:57:40.108579       1 pv_controller.go:1341] isVolumeReleased[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is released
I0903 04:57:40.108597       1 pv_controller.go:1405] doDeleteVolume [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]
I0903 04:57:40.135551       1 pv_controller.go:1260] deletion of volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 04:57:40.135573       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: set phase Failed
I0903 04:57:40.135583       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase Failed already set
E0903 04:57:40.135616       1 goroutinemap.go:150] Operation for "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" failed. No retries permitted until 2022-09-03 04:57:41.135590661 +0000 UTC m=+284.041728165 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:57:40.693846       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 04:57:42.886777       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 04:57:42.886813       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce to the node "capz-m0dmyx-md-0-4f2wx" mounted false
I0903 04:57:42.906996       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m0dmyx-md-0-4f2wx" succeeded. VolumesAttached: []
I0903 04:57:42.907256       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce") on node "capz-m0dmyx-md-0-4f2wx" 
I0903 04:57:42.908990       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
... skipping 8 lines ...
I0903 04:57:50.053559       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 04:57:52.898892       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 04:57:52.898944       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce to the node "capz-m0dmyx-md-0-4f2wx" mounted false
I0903 04:57:54.998303       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:57:55.099991       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:57:55.100061       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1334
I0903 04:57:55.100103       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase: Failed, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:57:55.100179       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is bound to claim azuredisk-1353/pvc-flmzh
I0903 04:57:55.100198       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:57:55.100263       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:57:55.100305       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:57:55.100415       1 pv_controller.go:1232] deleteVolumeOperation [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce] started
I0903 04:57:55.106887       1 pv_controller.go:1341] isVolumeReleased[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is released
I0903 04:57:55.106904       1 pv_controller.go:1405] doDeleteVolume [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]
I0903 04:57:55.107076       1 pv_controller.go:1260] deletion of volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) since it's in attaching or detaching state
I0903 04:57:55.107091       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: set phase Failed
I0903 04:57:55.107100       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase Failed already set
E0903 04:57:55.107177       1 goroutinemap.go:150] Operation for "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" failed. No retries permitted until 2022-09-03 04:57:57.107108215 +0000 UTC m=+300.013245719 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) since it's in attaching or detaching state"
I0903 04:57:55.142381       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-4f2wx ReadyCondition updated. Updating timestamp.
I0903 04:57:57.779269       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="87.702µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39112" resp=200
I0903 04:57:58.522692       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) returned with <nil>
I0903 04:57:58.522735       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce) succeeded
I0903 04:57:58.522746       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce was detached from node:capz-m0dmyx-md-0-4f2wx
I0903 04:57:58.522959       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce") on node "capz-m0dmyx-md-0-4f2wx" 
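The goroutinemap messages above show the PV controller's per-operation backoff doubling (durationBeforeRetry 500ms, then 1s, then 2s) while the disk is still attached or detaching; once the detach at 04:57:58 completes, the next retry deletes the disk. A rough sketch of the same doubling-backoff pattern using apimachinery's wait helpers, where deleteDisk is a stand-in rather than the driver's real function (illustrative only):

package example

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteWithBackoff retries a disk delete that can fail while the disk is
// still attached or detaching, doubling the wait between attempts
// (500ms, 1s, 2s, ...) much like the durationBeforeRetry values logged above.
func deleteWithBackoff(deleteDisk func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay
		Factor:   2,                      // double the delay each attempt
		Steps:    5,                      // give up after five attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteDisk(); err != nil {
			// e.g. "already attached to node" or "attaching or detaching state":
			// not done yet, try again after the backoff.
			return false, nil
		}
		return true, nil
	})
}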
... skipping 8 lines ...
I0903 04:58:09.999776       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:58:10.054173       1 gc_controller.go:161] GC'ing orphaned
I0903 04:58:10.054205       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 04:58:10.089261       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:58:10.100529       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:58:10.100589       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1334
I0903 04:58:10.100642       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase: Failed, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:58:10.100681       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is bound to claim azuredisk-1353/pvc-flmzh
I0903 04:58:10.100706       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:58:10.100718       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:58:10.100740       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:58:10.100768       1 pv_controller.go:1232] deleteVolumeOperation [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce] started
I0903 04:58:10.113656       1 pv_controller.go:1341] isVolumeReleased[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is released
... skipping 4 lines ...
I0903 04:58:15.363081       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce
I0903 04:58:15.363123       1 pv_controller.go:1436] volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" deleted
I0903 04:58:15.363138       1 pv_controller.go:1284] deleteVolumeOperation [pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: success
I0903 04:58:15.370927       1 pv_protection_controller.go:205] Got event on PV pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce
I0903 04:58:15.371061       1 pv_protection_controller.go:125] Processing PV pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce
I0903 04:58:15.371440       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce" with version 1392
I0903 04:58:15.371571       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: phase: Failed, bound to: "azuredisk-1353/pvc-flmzh (uid: 11d2a9d2-4f2c-448f-9fba-d3df229fbbce)", boundByController: true
I0903 04:58:15.371647       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: volume is bound to claim azuredisk-1353/pvc-flmzh
I0903 04:58:15.371707       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: claim azuredisk-1353/pvc-flmzh not found
I0903 04:58:15.371741       1 pv_controller.go:1108] reclaimVolume[pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce]: policy is Delete
I0903 04:58:15.371787       1 pv_controller.go:1753] scheduleOperation[delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]]
I0903 04:58:15.371836       1 pv_controller.go:1764] operation "delete-pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce[3e2f95f9-f56a-4a5c-bf33-eb41a7973e7f]" is already running, skipping
I0903 04:58:15.385235       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-11d2a9d2-4f2c-448f-9fba-d3df229fbbce
... skipping 49 lines ...
I0903 04:58:22.235179       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-knjh5.171140dc77d25c78, uid 39187e9b-120a-42ee-a1bf-4e2e0ab58e08, event type delete
I0903 04:58:22.238506       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-knjh5.171140dc7baf0cf2, uid f4afe107-32da-4630-a9eb-fd0ea8cdba8d, event type delete
I0903 04:58:22.241810       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-knjh5.171140dc82497bb5, uid 1e352bf2-a375-4cf9-99e8-55eaa6e698d4, event type delete
I0903 04:58:22.246657       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-flmzh.171140d6d8a59f03, uid c696dc6b-8ddb-4ddd-84ce-f9467e6bcf93, event type delete
I0903 04:58:22.250698       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-flmzh.171140d7a43e010f, uid e08ec96f-9358-45aa-b502-df11cf5d6ec7, event type delete
I0903 04:58:22.272224       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-j6z4g, uid c2e24f1b-2d65-4087-a7b9-e56421b486ea, event type delete
E0903 04:58:22.284864       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-dvtxx" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0903 04:58:22.307046       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1353, name kube-root-ca.crt, uid b3000579-34a5-4ac7-98db-085064e909cc, event type delete
I0903 04:58:22.311061       1 publisher.go:181] Finished syncing namespace "azuredisk-1353" (3.823567ms)
I0903 04:58:22.318434       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1353/default), service account deleted, removing tokens
I0903 04:58:22.318634       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1353, name default, uid 3f60c200-8ecf-4cdd-8442-2fd86ae45cfa, event type delete
I0903 04:58:22.318696       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (2.4µs)
I0903 04:58:22.340620       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1353, estimate: 0, errors: <nil>
... skipping 78 lines ...
I0903 04:58:23.375392       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" lun 0 to node "capz-m0dmyx-md-0-4f2wx".
I0903 04:58:23.375438       1 azure_controller_standard.go:93] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - attach disk(capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) with DiskEncryptionSetID()
I0903 04:58:24.198892       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-156
I0903 04:58:24.223501       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-156, name kube-root-ca.crt, uid b49e271c-ea59-4dc3-b1b0-dcfe9cf84f54, event type delete
I0903 04:58:24.226356       1 publisher.go:181] Finished syncing namespace "azuredisk-156" (2.781848ms)
I0903 04:58:24.295509       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-156, name default-token-mrnzp, uid 8ded03bd-bf95-4bdb-8c67-d8dffc291962, event type delete
E0903 04:58:24.307772       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-156/default: secrets "default-token-mf8bd" is forbidden: unable to create new content in namespace azuredisk-156 because it is being terminated
I0903 04:58:24.321040       1 tokens_controller.go:252] syncServiceAccount(azuredisk-156/default), service account deleted, removing tokens
I0903 04:58:24.321190       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-156, name default, uid a9759c72-b2fa-456f-ac63-10e844eba4cc, event type delete
I0903 04:58:24.321210       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (1.9µs)
I0903 04:58:24.345714       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (2.6µs)
I0903 04:58:24.347227       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-156, estimate: 0, errors: <nil>
I0903 04:58:24.355731       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-156" (161.281512ms)
... skipping 158 lines ...
I0903 04:58:52.928638       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:58:52.928649       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:58:52.928656       1 pv_controller.go:1764] operation "delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]" is already running, skipping
I0903 04:58:52.928685       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753] started
I0903 04:58:52.933157       1 pv_controller.go:1341] isVolumeReleased[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is released
I0903 04:58:52.933178       1 pv_controller.go:1405] doDeleteVolume [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]
I0903 04:58:52.972257       1 pv_controller.go:1260] deletion of volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 04:58:52.972278       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: set phase Failed
I0903 04:58:52.972287       1 pv_controller.go:858] updating PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: set phase Failed
I0903 04:58:52.980539       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1519
I0903 04:58:52.980570       1 pv_controller.go:879] volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" entered phase "Failed"
I0903 04:58:52.980580       1 pv_controller.go:901] volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
E0903 04:58:52.980627       1 goroutinemap.go:150] Operation for "delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]" failed. No retries permitted until 2022-09-03 04:58:53.480601164 +0000 UTC m=+356.386738568 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:58:52.980836       1 event.go:291] "Event occurred" object="pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:58:52.980960       1 pv_protection_controller.go:205] Got event on PV pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753
I0903 04:58:52.980990       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1519
I0903 04:58:52.981013       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase: Failed, bound to: "azuredisk-1563/pvc-qcnl5 (uid: d96939e1-2e92-4a69-ba04-f8b0b49ad753)", boundByController: true
I0903 04:58:52.981042       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is bound to claim azuredisk-1563/pvc-qcnl5
I0903 04:58:52.981061       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: claim azuredisk-1563/pvc-qcnl5 not found
I0903 04:58:52.981073       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:58:52.981088       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:58:52.981099       1 pv_controller.go:1766] operation "delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]" postponed due to exponential backoff
I0903 04:58:55.001318       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:58:55.102482       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:58:55.102555       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1519
I0903 04:58:55.102595       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase: Failed, bound to: "azuredisk-1563/pvc-qcnl5 (uid: d96939e1-2e92-4a69-ba04-f8b0b49ad753)", boundByController: true
I0903 04:58:55.102700       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is bound to claim azuredisk-1563/pvc-qcnl5
I0903 04:58:55.102803       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: claim azuredisk-1563/pvc-qcnl5 not found
I0903 04:58:55.102911       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:58:55.103003       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:58:55.103103       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753] started
I0903 04:58:55.108673       1 pv_controller.go:1341] isVolumeReleased[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is released
I0903 04:58:55.108695       1 pv_controller.go:1405] doDeleteVolume [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]
I0903 04:58:55.136467       1 pv_controller.go:1260] deletion of volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 04:58:55.136491       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: set phase Failed
I0903 04:58:55.136501       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase Failed already set
E0903 04:58:55.136536       1 goroutinemap.go:150] Operation for "delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]" failed. No retries permitted until 2022-09-03 04:58:56.136509865 +0000 UTC m=+359.042647369 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 04:58:57.779667       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="72.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54196" resp=200
I0903 04:58:58.978483       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
I0903 04:59:02.157453       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0903 04:59:02.956473       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 04:59:02.956508       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 to the node "capz-m0dmyx-md-0-4f2wx" mounted false
I0903 04:59:03.017585       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
... skipping 9 lines ...
I0903 04:59:09.961615       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:59:10.001889       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:59:10.056258       1 gc_controller.go:161] GC'ing orphaned
I0903 04:59:10.056313       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 04:59:10.109734       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:59:10.109793       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1519
I0903 04:59:10.109835       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase: Failed, bound to: "azuredisk-1563/pvc-qcnl5 (uid: d96939e1-2e92-4a69-ba04-f8b0b49ad753)", boundByController: true
I0903 04:59:10.109893       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is bound to claim azuredisk-1563/pvc-qcnl5
I0903 04:59:10.109914       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: claim azuredisk-1563/pvc-qcnl5 not found
I0903 04:59:10.109946       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:59:10.109967       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:59:10.109999       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753] started
I0903 04:59:10.123244       1 pv_controller.go:1341] isVolumeReleased[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is released
I0903 04:59:10.123262       1 pv_controller.go:1405] doDeleteVolume [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]
I0903 04:59:10.123295       1 pv_controller.go:1260] deletion of volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) since it's in attaching or detaching state
I0903 04:59:10.123308       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: set phase Failed
I0903 04:59:10.123324       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase Failed already set
E0903 04:59:10.123348       1 goroutinemap.go:150] Operation for "delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]" failed. No retries permitted until 2022-09-03 04:59:12.123331237 +0000 UTC m=+375.029468741 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753) since it's in attaching or detaching state"
I0903 04:59:10.761922       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 04:59:11.967126       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-control-plane-csq9p"
I0903 04:59:11.999637       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 04:59:15.156431       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-control-plane-csq9p ReadyCondition updated. Updating timestamp.
I0903 04:59:15.388687       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0903 04:59:17.779433       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="79.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50372" resp=200
... skipping 2 lines ...
I0903 04:59:18.652207       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753 was detached from node:capz-m0dmyx-md-0-4f2wx
I0903 04:59:18.652233       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753") on node "capz-m0dmyx-md-0-4f2wx" 
I0903 04:59:24.441835       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0903 04:59:25.002709       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 04:59:25.110569       1 pv_controller_base.go:528] resyncing PV controller
I0903 04:59:25.110804       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1519
I0903 04:59:25.110861       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase: Failed, bound to: "azuredisk-1563/pvc-qcnl5 (uid: d96939e1-2e92-4a69-ba04-f8b0b49ad753)", boundByController: true
I0903 04:59:25.110899       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is bound to claim azuredisk-1563/pvc-qcnl5
I0903 04:59:25.110920       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: claim azuredisk-1563/pvc-qcnl5 not found
I0903 04:59:25.110928       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:59:25.110946       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:59:25.111021       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753] started
I0903 04:59:25.116836       1 pv_controller.go:1341] isVolumeReleased[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is released
... skipping 4 lines ...
I0903 04:59:30.362232       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753
I0903 04:59:30.362268       1 pv_controller.go:1436] volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" deleted
I0903 04:59:30.362281       1 pv_controller.go:1284] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: success
I0903 04:59:30.374373       1 pv_protection_controller.go:205] Got event on PV pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753
I0903 04:59:30.374401       1 pv_protection_controller.go:125] Processing PV pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753
I0903 04:59:30.374401       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" with version 1577
I0903 04:59:30.374430       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: phase: Failed, bound to: "azuredisk-1563/pvc-qcnl5 (uid: d96939e1-2e92-4a69-ba04-f8b0b49ad753)", boundByController: true
I0903 04:59:30.374451       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: volume is bound to claim azuredisk-1563/pvc-qcnl5
I0903 04:59:30.374467       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: claim azuredisk-1563/pvc-qcnl5 not found
I0903 04:59:30.374474       1 pv_controller.go:1108] reclaimVolume[pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753]: policy is Delete
I0903 04:59:30.374488       1 pv_controller.go:1753] scheduleOperation[delete-pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753[8876b1c5-690b-4cc2-be56-fd214b37676a]]
I0903 04:59:30.374508       1 pv_controller.go:1232] deleteVolumeOperation [pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753] started
I0903 04:59:30.378477       1 pv_controller.go:1244] Volume "pvc-d96939e1-2e92-4a69-ba04-f8b0b49ad753" is already being deleted
... skipping 113 lines ...
I0903 04:59:38.791550       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-2qldr.171140ee5996af13, uid ce19d363-5e42-40d7-9a6d-ecc5a2f9e320, event type delete
I0903 04:59:38.795537       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-2qldr.171140efaac798b9, uid 69541fe3-c133-445f-a934-2e49de994256, event type delete
I0903 04:59:38.803340       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name pvc-qcnl5.171140e76befccc6, uid 208f90b6-f97d-4d52-90c9-08e51eed15c7, event type delete
I0903 04:59:38.807548       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name pvc-qcnl5.171140e8060f5515, uid d97c1c35-5764-4b63-8733-7358931ff748, event type delete
I0903 04:59:38.815874       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1563, name default-token-28bgx, uid 3f25b2dc-748c-4155-ab60-a6d06ce74b41, event type delete
I0903 04:59:38.825869       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1563, name kube-root-ca.crt, uid c8fd56af-abac-45b0-ad99-11e6fe4e59c3, event type delete
E0903 04:59:38.828300       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1563/default: secrets "default-token-qmjsr" is forbidden: unable to create new content in namespace azuredisk-1563 because it is being terminated
I0903 04:59:38.829641       1 publisher.go:181] Finished syncing namespace "azuredisk-1563" (3.675468ms)
I0903 04:59:38.853745       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1563/default), service account deleted, removing tokens
I0903 04:59:38.853785       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1563, name default, uid df91b6fa-66f9-473d-8ffd-c319ba430a21, event type delete
I0903 04:59:38.853812       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1563" (1.3µs)
I0903 04:59:38.868835       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1563" (1.8µs)
I0903 04:59:38.870440       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1563, estimate: 0, errors: <nil>
... skipping 154 lines ...
I0903 04:59:57.884175       1 pv_controller.go:1108] reclaimVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: policy is Delete
I0903 04:59:57.884188       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]]
I0903 04:59:57.884196       1 pv_controller.go:1764] operation "delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]" is already running, skipping
I0903 04:59:57.884274       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e881165f-aed6-4043-89c6-f4d286053e70] started
I0903 04:59:57.885695       1 pv_controller.go:1341] isVolumeReleased[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is released
I0903 04:59:57.885717       1 pv_controller.go:1405] doDeleteVolume [pvc-e881165f-aed6-4043-89c6-f4d286053e70]
I0903 04:59:57.922178       1 pv_controller.go:1260] deletion of volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 04:59:57.922387       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: set phase Failed
I0903 04:59:57.922514       1 pv_controller.go:858] updating PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: set phase Failed
I0903 04:59:57.928354       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" with version 1674
I0903 04:59:57.928553       1 pv_controller.go:879] volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" entered phase "Failed"
I0903 04:59:57.928728       1 pv_controller.go:901] volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
E0903 04:59:57.928936       1 goroutinemap.go:150] Operation for "delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]" failed. No retries permitted until 2022-09-03 04:59:58.428907464 +0000 UTC m=+421.335044968 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 04:59:57.929347       1 event.go:291] "Event occurred" object="pvc-e881165f-aed6-4043-89c6-f4d286053e70" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 04:59:57.929630       1 pv_protection_controller.go:205] Got event on PV pvc-e881165f-aed6-4043-89c6-f4d286053e70
I0903 04:59:57.929662       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" with version 1674
I0903 04:59:57.929830       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: phase: Failed, bound to: "azuredisk-7463/pvc-t5rhc (uid: e881165f-aed6-4043-89c6-f4d286053e70)", boundByController: true
I0903 04:59:57.929865       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is bound to claim azuredisk-7463/pvc-t5rhc
I0903 04:59:57.929887       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: claim azuredisk-7463/pvc-t5rhc not found
I0903 04:59:57.929954       1 pv_controller.go:1108] reclaimVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: policy is Delete
I0903 04:59:57.930034       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]]
I0903 04:59:57.930049       1 pv_controller.go:1766] operation "delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]" postponed due to exponential backoff
I0903 04:59:57.958830       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
... skipping 14 lines ...
I0903 05:00:09.963801       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:00:10.006618       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:00:10.060997       1 gc_controller.go:161] GC'ing orphaned
I0903 05:00:10.061030       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:00:10.112381       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:00:10.112578       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" with version 1674
I0903 05:00:10.112646       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: phase: Failed, bound to: "azuredisk-7463/pvc-t5rhc (uid: e881165f-aed6-4043-89c6-f4d286053e70)", boundByController: true
I0903 05:00:10.112688       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is bound to claim azuredisk-7463/pvc-t5rhc
I0903 05:00:10.112708       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: claim azuredisk-7463/pvc-t5rhc not found
I0903 05:00:10.112717       1 pv_controller.go:1108] reclaimVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: policy is Delete
I0903 05:00:10.112738       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]]
I0903 05:00:10.112782       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e881165f-aed6-4043-89c6-f4d286053e70] started
I0903 05:00:10.124625       1 pv_controller.go:1341] isVolumeReleased[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is released
I0903 05:00:10.124645       1 pv_controller.go:1405] doDeleteVolume [pvc-e881165f-aed6-4043-89c6-f4d286053e70]
I0903 05:00:10.124679       1 pv_controller.go:1260] deletion of volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) since it's in attaching or detaching state
I0903 05:00:10.124690       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: set phase Failed
I0903 05:00:10.124703       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: phase Failed already set
E0903 05:00:10.124736       1 goroutinemap.go:150] Operation for "delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]" failed. No retries permitted until 2022-09-03 05:00:11.124712619 +0000 UTC m=+434.030850123 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) since it's in attaching or detaching state"
I0903 05:00:10.808762       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:00:13.389462       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 05:00:17.706867       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) returned with <nil>
I0903 05:00:17.706903       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70) succeeded
I0903 05:00:17.706913       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70 was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:00:17.706939       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70") on node "capz-m0dmyx-md-0-pj7kz" 
... skipping 5 lines ...
I0903 05:00:22.062098       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 05:00:22.980260       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0903 05:00:23.953629       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0903 05:00:25.007681       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:00:25.112599       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:00:25.112835       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" with version 1674
I0903 05:00:25.112961       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: phase: Failed, bound to: "azuredisk-7463/pvc-t5rhc (uid: e881165f-aed6-4043-89c6-f4d286053e70)", boundByController: true
I0903 05:00:25.113000       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is bound to claim azuredisk-7463/pvc-t5rhc
I0903 05:00:25.113027       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: claim azuredisk-7463/pvc-t5rhc not found
I0903 05:00:25.113036       1 pv_controller.go:1108] reclaimVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: policy is Delete
I0903 05:00:25.113053       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]]
I0903 05:00:25.113093       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e881165f-aed6-4043-89c6-f4d286053e70] started
I0903 05:00:25.119676       1 pv_controller.go:1341] isVolumeReleased[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is released
... skipping 5 lines ...
I0903 05:00:30.349593       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-e881165f-aed6-4043-89c6-f4d286053e70
I0903 05:00:30.349629       1 pv_controller.go:1436] volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" deleted
I0903 05:00:30.349644       1 pv_controller.go:1284] deleteVolumeOperation [pvc-e881165f-aed6-4043-89c6-f4d286053e70]: success
I0903 05:00:30.361320       1 pv_protection_controller.go:205] Got event on PV pvc-e881165f-aed6-4043-89c6-f4d286053e70
I0903 05:00:30.361354       1 pv_protection_controller.go:125] Processing PV pvc-e881165f-aed6-4043-89c6-f4d286053e70
I0903 05:00:30.361721       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" with version 1725
I0903 05:00:30.361762       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: phase: Failed, bound to: "azuredisk-7463/pvc-t5rhc (uid: e881165f-aed6-4043-89c6-f4d286053e70)", boundByController: true
I0903 05:00:30.361788       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: volume is bound to claim azuredisk-7463/pvc-t5rhc
I0903 05:00:30.361860       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: claim azuredisk-7463/pvc-t5rhc not found
I0903 05:00:30.362057       1 pv_controller.go:1108] reclaimVolume[pvc-e881165f-aed6-4043-89c6-f4d286053e70]: policy is Delete
I0903 05:00:30.362079       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e881165f-aed6-4043-89c6-f4d286053e70[a29925ff-5f7d-4218-92e1-acfd2b4604b4]]
I0903 05:00:30.362172       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e881165f-aed6-4043-89c6-f4d286053e70] started
I0903 05:00:30.365341       1 pv_controller.go:1244] Volume "pvc-e881165f-aed6-4043-89c6-f4d286053e70" is already being deleted
... skipping 114 lines ...
I0903 05:00:38.780631       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-74ztw.171140fde71c9805, uid 63a97c96-c0ce-4de5-a0ce-8dc7aebfe071, event type delete
I0903 05:00:38.784853       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-74ztw.171140fdefa70d89, uid be1a150a-8c30-4541-b932-088b0c3d2841, event type delete
I0903 05:00:38.788684       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-74ztw.171140fdf6bb23ae, uid eb2e2ee0-58f3-4a27-9c33-3257832c60ee, event type delete
I0903 05:00:38.791596       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name pvc-t5rhc.171140f8c5aed318, uid fd73f30e-4b0e-414e-b3f6-68cf5db1e524, event type delete
I0903 05:00:38.795865       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name pvc-t5rhc.171140f95eed57d0, uid 3e142be3-686b-41e1-9880-462a1bff796e, event type delete
I0903 05:00:38.814158       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7463, name default-token-52sz8, uid 363589a2-6295-46dc-88ae-62017dfcf7c1, event type delete
E0903 05:00:38.830776       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7463/default: secrets "default-token-9wk8s" is forbidden: unable to create new content in namespace azuredisk-7463 because it is being terminated
I0903 05:00:38.862127       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7463, name kube-root-ca.crt, uid f3ae67de-9336-42fe-8b50-5c65b0c204d4, event type delete
I0903 05:00:38.864960       1 publisher.go:181] Finished syncing namespace "azuredisk-7463" (2.661448ms)
I0903 05:00:38.894177       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7463/default), service account deleted, removing tokens
I0903 05:00:38.894569       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7463, name default, uid 57fe90e6-718b-4120-8eff-d6bc4e1e78a7, event type delete
I0903 05:00:38.894598       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7463" (2.2µs)
I0903 05:00:38.908693       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7463" (2.1µs)
... skipping 146 lines ...
I0903 05:00:57.986677       1 pv_controller.go:1108] reclaimVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: policy is Delete
I0903 05:00:57.986720       1 pv_controller.go:1753] scheduleOperation[delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]]
I0903 05:00:57.986813       1 pv_controller.go:1764] operation "delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]" is already running, skipping
I0903 05:00:57.986431       1 pv_controller.go:1232] deleteVolumeOperation [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c] started
I0903 05:00:57.989873       1 pv_controller.go:1341] isVolumeReleased[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is released
I0903 05:00:57.989889       1 pv_controller.go:1405] doDeleteVolume [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]
I0903 05:00:58.026358       1 pv_controller.go:1260] deletion of volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:00:58.026380       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: set phase Failed
I0903 05:00:58.026387       1 pv_controller.go:858] updating PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: set phase Failed
I0903 05:00:58.032275       1 pv_protection_controller.go:205] Got event on PV pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c
I0903 05:00:58.032703       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" with version 1820
I0903 05:00:58.033034       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: phase: Failed, bound to: "azuredisk-9241/pvc-5tvxc (uid: 90f39350-dddb-4643-be8d-8ffbde3b8f4c)", boundByController: true
I0903 05:00:58.033213       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is bound to claim azuredisk-9241/pvc-5tvxc
I0903 05:00:58.033392       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: claim azuredisk-9241/pvc-5tvxc not found
I0903 05:00:58.033534       1 pv_controller.go:1108] reclaimVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: policy is Delete
I0903 05:00:58.032792       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" with version 1820
I0903 05:00:58.033803       1 pv_controller.go:879] volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" entered phase "Failed"
I0903 05:00:58.033931       1 pv_controller.go:901] volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:00:58.033655       1 pv_controller.go:1753] scheduleOperation[delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]]
E0903 05:00:58.034165       1 goroutinemap.go:150] Operation for "delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]" failed. No retries permitted until 2022-09-03 05:00:58.534068711 +0000 UTC m=+481.440206215 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:00:58.034307       1 pv_controller.go:1766] operation "delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]" postponed due to exponential backoff
I0903 05:00:58.034471       1 event.go:291] "Event occurred" object="pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:01:02.095052       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 05:01:02.095265       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c to the node "capz-m0dmyx-md-0-pj7kz" mounted false
I0903 05:01:02.171119       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 05:01:02.172040       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c to the node "capz-m0dmyx-md-0-pj7kz" mounted false
... skipping 11 lines ...
I0903 05:01:09.965478       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:01:10.014810       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:01:10.062694       1 gc_controller.go:161] GC'ing orphaned
I0903 05:01:10.062745       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:01:10.114485       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:01:10.114619       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" with version 1820
I0903 05:01:10.114681       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: phase: Failed, bound to: "azuredisk-9241/pvc-5tvxc (uid: 90f39350-dddb-4643-be8d-8ffbde3b8f4c)", boundByController: true
I0903 05:01:10.114719       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is bound to claim azuredisk-9241/pvc-5tvxc
I0903 05:01:10.114736       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: claim azuredisk-9241/pvc-5tvxc not found
I0903 05:01:10.114745       1 pv_controller.go:1108] reclaimVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: policy is Delete
I0903 05:01:10.114764       1 pv_controller.go:1753] scheduleOperation[delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]]
I0903 05:01:10.114802       1 pv_controller.go:1232] deleteVolumeOperation [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c] started
I0903 05:01:10.126764       1 pv_controller.go:1341] isVolumeReleased[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is released
I0903 05:01:10.126785       1 pv_controller.go:1405] doDeleteVolume [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]
I0903 05:01:10.126820       1 pv_controller.go:1260] deletion of volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) since it's in attaching or detaching state
I0903 05:01:10.126836       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: set phase Failed
I0903 05:01:10.126846       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: phase Failed already set
E0903 05:01:10.126877       1 goroutinemap.go:150] Operation for "delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]" failed. No retries permitted until 2022-09-03 05:01:11.126854668 +0000 UTC m=+494.032992172 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) since it's in attaching or detaching state"
I0903 05:01:10.851784       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:01:10.956183       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0903 05:01:14.544186       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 24 items received
I0903 05:01:16.588967       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0903 05:01:17.779389       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="58.9µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:57760" resp=200
I0903 05:01:17.781007       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c) returned with <nil>
... skipping 2 lines ...
I0903 05:01:17.781185       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c") on node "capz-m0dmyx-md-0-pj7kz" 
I0903 05:01:21.022359       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 28 items received
I0903 05:01:24.962706       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 22 items received
I0903 05:01:25.015166       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:01:25.115663       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:01:25.115872       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" with version 1820
I0903 05:01:25.115929       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: phase: Failed, bound to: "azuredisk-9241/pvc-5tvxc (uid: 90f39350-dddb-4643-be8d-8ffbde3b8f4c)", boundByController: true
I0903 05:01:25.115969       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is bound to claim azuredisk-9241/pvc-5tvxc
I0903 05:01:25.115991       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: claim azuredisk-9241/pvc-5tvxc not found
I0903 05:01:25.116006       1 pv_controller.go:1108] reclaimVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: policy is Delete
I0903 05:01:25.116023       1 pv_controller.go:1753] scheduleOperation[delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]]
I0903 05:01:25.116061       1 pv_controller.go:1232] deleteVolumeOperation [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c] started
I0903 05:01:25.122346       1 pv_controller.go:1341] isVolumeReleased[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is released
... skipping 9 lines ...
I0903 05:01:30.373615       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c
I0903 05:01:30.373651       1 pv_controller.go:1436] volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" deleted
I0903 05:01:30.373664       1 pv_controller.go:1284] deleteVolumeOperation [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: success
I0903 05:01:30.384939       1 pv_protection_controller.go:205] Got event on PV pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c
I0903 05:01:30.384967       1 pv_protection_controller.go:125] Processing PV pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c
I0903 05:01:30.385334       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" with version 1870
I0903 05:01:30.385424       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: phase: Failed, bound to: "azuredisk-9241/pvc-5tvxc (uid: 90f39350-dddb-4643-be8d-8ffbde3b8f4c)", boundByController: true
I0903 05:01:30.385490       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: volume is bound to claim azuredisk-9241/pvc-5tvxc
I0903 05:01:30.385537       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: claim azuredisk-9241/pvc-5tvxc not found
I0903 05:01:30.385565       1 pv_controller.go:1108] reclaimVolume[pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c]: policy is Delete
I0903 05:01:30.385612       1 pv_controller.go:1753] scheduleOperation[delete-pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c[e7214b90-2875-4363-902c-1e117a41a0ca]]
I0903 05:01:30.385712       1 pv_controller.go:1232] deleteVolumeOperation [pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c] started
I0903 05:01:30.389173       1 pv_controller.go:1244] Volume "pvc-90f39350-dddb-4643-be8d-8ffbde3b8f4c" is already being deleted
... skipping 114 lines ...
I0903 05:01:38.788178       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-5cpv5.1711410bbb74b8f6, uid c0806d39-50b1-4086-a67c-46799fc262da, event type delete
I0903 05:01:38.791592       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-5cpv5.1711410bbe24989b, uid 2a949a0b-96d7-4240-b1f2-5515cf83fd0b, event type delete
I0903 05:01:38.794672       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-5cpv5.1711410bc4530177, uid b97f779a-8ba9-47fe-9cb3-e0ff9e5665fe, event type delete
I0903 05:01:38.799305       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-5tvxc.17114106bf57ffa9, uid 43fc8e67-d731-47f2-a3da-9382fb0dbfa0, event type delete
I0903 05:01:38.803403       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-5tvxc.17114107584f1b3b, uid 59329520-fb5d-4a76-82d0-2f3cff3e762b, event type delete
I0903 05:01:38.840822       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9241, name default-token-jszpw, uid b8e11eba-637b-46ff-a62a-6eec665cd66a, event type delete
E0903 05:01:38.866145       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9241/default: secrets "default-token-9xr9n" is forbidden: unable to create new content in namespace azuredisk-9241 because it is being terminated
I0903 05:01:38.867780       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9241, name default, uid 5cd93f35-c112-4438-9dea-408df4bd799f, event type delete
I0903 05:01:38.867802       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9241" (1.9µs)
I0903 05:01:38.869462       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9241/default), service account deleted, removing tokens
I0903 05:01:38.878096       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9241, name kube-root-ca.crt, uid 70611be9-7765-48df-b968-b6f6b49aaf23, event type delete
I0903 05:01:38.880432       1 publisher.go:181] Finished syncing namespace "azuredisk-9241" (1.98103ms)
I0903 05:01:38.925509       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9241, estimate: 0, errors: <nil>
... skipping 679 lines ...
I0903 05:03:06.781978       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:06.782016       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]]
I0903 05:03:06.782035       1 pv_controller.go:1764] operation "delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]" is already running, skipping
I0903 05:03:06.782061       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041] started
I0903 05:03:06.785184       1 pv_controller.go:1341] isVolumeReleased[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is released
I0903 05:03:06.785201       1 pv_controller.go:1405] doDeleteVolume [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]
I0903 05:03:06.827648       1 pv_controller.go:1260] deletion of volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:03:06.827673       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: set phase Failed
I0903 05:03:06.827683       1 pv_controller.go:858] updating PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: set phase Failed
I0903 05:03:06.832518       1 pv_protection_controller.go:205] Got event on PV pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041
I0903 05:03:06.832763       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2111
I0903 05:03:06.832915       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase: Failed, bound to: "azuredisk-9336/pvc-9jllz (uid: 5f9299a6-d82d-423c-bbff-34ce4aadd041)", boundByController: true
I0903 05:03:06.833029       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is bound to claim azuredisk-9336/pvc-9jllz
I0903 05:03:06.833121       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: claim azuredisk-9336/pvc-9jllz not found
I0903 05:03:06.833194       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:06.833333       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]]
I0903 05:03:06.833465       1 pv_controller.go:1764] operation "delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]" is already running, skipping
I0903 05:03:06.833789       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2111
I0903 05:03:06.833814       1 pv_controller.go:879] volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" entered phase "Failed"
I0903 05:03:06.833842       1 pv_controller.go:901] volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
E0903 05:03:06.833897       1 goroutinemap.go:150] Operation for "delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]" failed. No retries permitted until 2022-09-03 05:03:07.333868929 +0000 UTC m=+610.240006333 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:03:06.834070       1 event.go:291] "Event occurred" object="pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
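
The cycle just above (deleteVolumeOperation -> doDeleteVolume -> "already attached to node ..., could not be deleted" -> phase Failed -> goroutinemap backoff -> VolumeFailedDelete event) is the PV controller repeatedly retrying deletion of a dynamically provisioned Azure disk until the detach completes; the durationBeforeRetry doubles on each failure (500ms, then 1s, then 2s later in this log) and the delete finally succeeds once the disk is off the VM. The following is a minimal, self-contained Go sketch of that retry pattern only; the deleteDisk helper and the attempt threshold are hypothetical stand-ins, not the controller's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStillAttached mimics the "already attached to node, could not be deleted"
// failure seen in the log; the real controller receives it from the Azure API.
var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// deleteDisk is a hypothetical stand-in for the cloud call that deletes a
// managed disk; it keeps failing until the detach has finished.
func deleteDisk(attempt int) error {
	if attempt < 4 { // pretend the detach completes after a few tries
		return errStillAttached
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // initial durationBeforeRetry, as in the log
	for attempt := 1; ; attempt++ {
		if err := deleteDisk(attempt); err == nil {
			fmt.Println("deleteVolumeOperation: success")
			return
		} else {
			fmt.Printf("deletion failed (attempt %d): %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // exponential backoff: 500ms -> 1s -> 2s, matching the log
		}
	}
}
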
I0903 05:03:07.778790       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="157.902µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43500" resp=200
I0903 05:03:09.968659       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:03:09.976830       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:03:09.999928       1 controller.go:272] Triggering nodeSync
I0903 05:03:09.999954       1 controller.go:291] nodeSync has been triggered
... skipping 27 lines ...
I0903 05:03:10.120098       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase Bound already set
I0903 05:03:10.120097       1 pv_controller.go:922] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: already bound to "azuredisk-9336/pvc-znzmh"
I0903 05:03:10.120105       1 pv_controller.go:858] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: set phase Bound
I0903 05:03:10.120107       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2111
I0903 05:03:10.120113       1 pv_controller.go:861] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase Bound already set
I0903 05:03:10.120121       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh]: binding to "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1"
I0903 05:03:10.120126       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase: Failed, bound to: "azuredisk-9336/pvc-9jllz (uid: 5f9299a6-d82d-423c-bbff-34ce4aadd041)", boundByController: true
I0903 05:03:10.120140       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh]: already bound to "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1"
I0903 05:03:10.120147       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is bound to claim azuredisk-9336/pvc-9jllz
I0903 05:03:10.120149       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh] status: set phase Bound
I0903 05:03:10.120162       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: claim azuredisk-9336/pvc-9jllz not found
I0903 05:03:10.120169       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:10.120170       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh] status: phase Bound already set
... skipping 17 lines ...
I0903 05:03:10.120332       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-bvk6q] status: phase Bound already set
I0903 05:03:10.120343       1 pv_controller.go:1038] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" bound to claim "azuredisk-9336/pvc-bvk6q"
I0903 05:03:10.120356       1 pv_controller.go:1039] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-bvk6q (uid: c0d2c20d-ca25-4739-aa78-ed7e1a194817)", boundByController: true
I0903 05:03:10.120367       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-bvk6q" status after binding: phase: Bound, bound to: "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817", bindCompleted: true, boundByController: true
I0903 05:03:10.128422       1 pv_controller.go:1341] isVolumeReleased[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is released
I0903 05:03:10.128440       1 pv_controller.go:1405] doDeleteVolume [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]
I0903 05:03:10.154678       1 pv_controller.go:1260] deletion of volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:03:10.154700       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: set phase Failed
I0903 05:03:10.154711       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase Failed already set
E0903 05:03:10.154751       1 goroutinemap.go:150] Operation for "delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]" failed. No retries permitted until 2022-09-03 05:03:11.154720043 +0000 UTC m=+614.060857547 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:03:10.274229       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0903 05:03:10.958435       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:03:11.000237       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-03 05:03:11.000119836 +0000 UTC m=+613.906257340"
I0903 05:03:11.000237       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-03 05:03:11.000116635 +0000 UTC m=+613.906254339"
I0903 05:03:11.001539       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="1.403019ms"
I0903 05:03:11.001588       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.456819ms"
... skipping 59 lines ...
I0903 05:03:25.121528       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is bound to claim azuredisk-9336/pvc-bvk6q
I0903 05:03:25.121546       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: claim azuredisk-9336/pvc-bvk6q found: phase: Bound, bound to: "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817", bindCompleted: true, boundByController: true
I0903 05:03:25.121565       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: all is bound
I0903 05:03:25.121572       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: set phase Bound
I0903 05:03:25.121582       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase Bound already set
I0903 05:03:25.121597       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2111
I0903 05:03:25.121618       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase: Failed, bound to: "azuredisk-9336/pvc-9jllz (uid: 5f9299a6-d82d-423c-bbff-34ce4aadd041)", boundByController: true
I0903 05:03:25.121640       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is bound to claim azuredisk-9336/pvc-9jllz
I0903 05:03:25.121664       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: claim azuredisk-9336/pvc-9jllz not found
I0903 05:03:25.121699       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:25.121716       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]]
I0903 05:03:25.121767       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041] started
I0903 05:03:25.126347       1 pv_controller.go:1341] isVolumeReleased[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is released
I0903 05:03:25.126366       1 pv_controller.go:1405] doDeleteVolume [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]
I0903 05:03:25.126400       1 pv_controller.go:1260] deletion of volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) since it's in attaching or detaching state
I0903 05:03:25.126411       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: set phase Failed
I0903 05:03:25.126426       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase Failed already set
E0903 05:03:25.126460       1 goroutinemap.go:150] Operation for "delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]" failed. No retries permitted until 2022-09-03 05:03:27.126435744 +0000 UTC m=+630.032573248 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) since it's in attaching or detaching state"
I0903 05:03:25.877930       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 05:03:27.779443       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="66.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53536" resp=200
I0903 05:03:28.134645       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) returned with <nil>
I0903 05:03:28.134684       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041) succeeded
I0903 05:03:28.134694       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041 was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:03:28.134899       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041") on node "capz-m0dmyx-md-0-pj7kz" 
... skipping 15 lines ...
I0903 05:03:40.121371       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is bound to claim azuredisk-9336/pvc-bvk6q
I0903 05:03:40.121386       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: claim azuredisk-9336/pvc-bvk6q found: phase: Bound, bound to: "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817", bindCompleted: true, boundByController: true
I0903 05:03:40.121401       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: all is bound
I0903 05:03:40.121409       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: set phase Bound
I0903 05:03:40.121418       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase Bound already set
I0903 05:03:40.121430       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2111
I0903 05:03:40.121450       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase: Failed, bound to: "azuredisk-9336/pvc-9jllz (uid: 5f9299a6-d82d-423c-bbff-34ce4aadd041)", boundByController: true
I0903 05:03:40.121477       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is bound to claim azuredisk-9336/pvc-9jllz
I0903 05:03:40.121497       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: claim azuredisk-9336/pvc-9jllz not found
I0903 05:03:40.121505       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:40.121521       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]]
I0903 05:03:40.121550       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041] started
I0903 05:03:40.121621       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-znzmh" with version 1898
... skipping 34 lines ...
I0903 05:03:45.377377       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041
I0903 05:03:45.377447       1 pv_controller.go:1436] volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" deleted
I0903 05:03:45.377480       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: success
I0903 05:03:45.383741       1 pv_protection_controller.go:205] Got event on PV pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041
I0903 05:03:45.383988       1 pv_protection_controller.go:125] Processing PV pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041
I0903 05:03:45.384041       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" with version 2170
I0903 05:03:45.384787       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: phase: Failed, bound to: "azuredisk-9336/pvc-9jllz (uid: 5f9299a6-d82d-423c-bbff-34ce4aadd041)", boundByController: true
I0903 05:03:45.384956       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: volume is bound to claim azuredisk-9336/pvc-9jllz
I0903 05:03:45.385134       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: claim azuredisk-9336/pvc-9jllz not found
I0903 05:03:45.385293       1 pv_controller.go:1108] reclaimVolume[pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041]: policy is Delete
I0903 05:03:45.385500       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041[28b345d5-4d27-41a7-8f7d-baca479a309a]]
I0903 05:03:45.385654       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041] started
I0903 05:03:45.389713       1 pv_controller_base.go:235] volume "pvc-5f9299a6-d82d-423c-bbff-34ce4aadd041" deleted
... skipping 257 lines ...
I0903 05:04:31.960796       1 pv_controller.go:1108] reclaimVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: policy is Delete
I0903 05:04:31.960811       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]]
I0903 05:04:31.960818       1 pv_controller.go:1764] operation "delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]" is already running, skipping
I0903 05:04:31.960847       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817] started
I0903 05:04:31.962632       1 pv_controller.go:1341] isVolumeReleased[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is released
I0903 05:04:31.962648       1 pv_controller.go:1405] doDeleteVolume [pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]
I0903 05:04:31.962701       1 pv_controller.go:1260] deletion of volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817) since it's in attaching or detaching state
I0903 05:04:31.962717       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: set phase Failed
I0903 05:04:31.962726       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: set phase Failed
I0903 05:04:31.965327       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" with version 2250
I0903 05:04:31.965503       1 pv_controller.go:879] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" entered phase "Failed"
I0903 05:04:31.965520       1 pv_controller.go:901] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817) since it's in attaching or detaching state
E0903 05:04:31.965568       1 goroutinemap.go:150] Operation for "delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]" failed. No retries permitted until 2022-09-03 05:04:32.465540679 +0000 UTC m=+695.371678083 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817) since it's in attaching or detaching state"
I0903 05:04:31.965813       1 event.go:291] "Event occurred" object="pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817) since it's in attaching or detaching state"
I0903 05:04:31.966035       1 pv_protection_controller.go:205] Got event on PV pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817
I0903 05:04:31.966169       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" with version 2250
I0903 05:04:31.966279       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase: Failed, bound to: "azuredisk-9336/pvc-bvk6q (uid: c0d2c20d-ca25-4739-aa78-ed7e1a194817)", boundByController: true
I0903 05:04:31.966388       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is bound to claim azuredisk-9336/pvc-bvk6q
I0903 05:04:31.966489       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: claim azuredisk-9336/pvc-bvk6q not found
I0903 05:04:31.966588       1 pv_controller.go:1108] reclaimVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: policy is Delete
I0903 05:04:31.966683       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]]
I0903 05:04:31.966791       1 pv_controller.go:1766] operation "delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]" postponed due to exponential backoff
I0903 05:04:37.780182       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.801µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34406" resp=200
... skipping 19 lines ...
I0903 05:04:40.125343       1 pv_controller.go:922] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: already bound to "azuredisk-9336/pvc-znzmh"
I0903 05:04:40.125357       1 pv_controller.go:861] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase Bound already set
I0903 05:04:40.125363       1 pv_controller.go:858] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: set phase Bound
I0903 05:04:40.125386       1 pv_controller.go:861] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase Bound already set
I0903 05:04:40.125388       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" with version 2250
I0903 05:04:40.125409       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh]: binding to "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1"
I0903 05:04:40.125434       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase: Failed, bound to: "azuredisk-9336/pvc-bvk6q (uid: c0d2c20d-ca25-4739-aa78-ed7e1a194817)", boundByController: true
I0903 05:04:40.125473       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh]: already bound to "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1"
I0903 05:04:40.125480       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is bound to claim azuredisk-9336/pvc-bvk6q
I0903 05:04:40.125499       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9336/pvc-znzmh] status: set phase Bound
I0903 05:04:40.125533       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: claim azuredisk-9336/pvc-bvk6q not found
I0903 05:04:40.125553       1 pv_controller.go:1108] reclaimVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: policy is Delete
I0903 05:04:40.125620       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]]
... skipping 8 lines ...
I0903 05:04:45.387897       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817
I0903 05:04:45.387933       1 pv_controller.go:1436] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" deleted
I0903 05:04:45.387944       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: success
I0903 05:04:45.392436       1 pv_protection_controller.go:205] Got event on PV pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817
I0903 05:04:45.392730       1 pv_protection_controller.go:125] Processing PV pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817
I0903 05:04:45.392733       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" with version 2273
I0903 05:04:45.393380       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: phase: Failed, bound to: "azuredisk-9336/pvc-bvk6q (uid: c0d2c20d-ca25-4739-aa78-ed7e1a194817)", boundByController: true
I0903 05:04:45.393631       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: volume is bound to claim azuredisk-9336/pvc-bvk6q
I0903 05:04:45.393803       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: claim azuredisk-9336/pvc-bvk6q not found
I0903 05:04:45.393940       1 pv_controller.go:1108] reclaimVolume[pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817]: policy is Delete
I0903 05:04:45.394091       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]]
I0903 05:04:45.394198       1 pv_controller.go:1764] operation "delete-pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817[6e0af26a-f87a-4e78-a1a3-3ff4e42e5cf7]" is already running, skipping
I0903 05:04:45.398771       1 pv_controller_base.go:235] volume "pvc-c0d2c20d-ca25-4739-aa78-ed7e1a194817" deleted
... skipping 152 lines ...
I0903 05:05:22.935734       1 pv_controller.go:1108] reclaimVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: policy is Delete
I0903 05:05:22.935763       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]]
I0903 05:05:22.935774       1 pv_controller.go:1764] operation "delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]" is already running, skipping
I0903 05:05:22.935799       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1] started
I0903 05:05:22.937346       1 pv_controller.go:1341] isVolumeReleased[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is released
I0903 05:05:22.937363       1 pv_controller.go:1405] doDeleteVolume [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]
I0903 05:05:22.963151       1 pv_controller.go:1260] deletion of volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:05:22.963380       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: set phase Failed
I0903 05:05:22.963553       1 pv_controller.go:858] updating PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: set phase Failed
I0903 05:05:22.971274       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" with version 2338
I0903 05:05:22.971459       1 pv_controller.go:879] volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" entered phase "Failed"
I0903 05:05:22.971595       1 pv_controller.go:901] volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:05:22.971321       1 pv_protection_controller.go:205] Got event on PV pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1
I0903 05:05:22.971339       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" with version 2338
I0903 05:05:22.971803       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase: Failed, bound to: "azuredisk-9336/pvc-znzmh (uid: 1b4dea3d-b8bf-4332-a183-79e93d5a91c1)", boundByController: true
I0903 05:05:22.971860       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is bound to claim azuredisk-9336/pvc-znzmh
I0903 05:05:22.971885       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: claim azuredisk-9336/pvc-znzmh not found
I0903 05:05:22.971897       1 pv_controller.go:1108] reclaimVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: policy is Delete
I0903 05:05:22.971914       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]]
E0903 05:05:22.972032       1 goroutinemap.go:150] Operation for "delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]" failed. No retries permitted until 2022-09-03 05:05:23.471747424 +0000 UTC m=+746.377884828 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:05:22.972153       1 pv_controller.go:1766] operation "delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]" postponed due to exponential backoff
I0903 05:05:22.972068       1 event.go:291] "Event occurred" object="pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:05:23.178216       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 05:05:23.178267       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 to the node "capz-m0dmyx-md-0-4f2wx" mounted false
I0903 05:05:23.215014       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 05:05:23.216172       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 to the node "capz-m0dmyx-md-0-4f2wx" mounted false
... skipping 3 lines ...
I0903 05:05:23.236980       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 from node "capz-m0dmyx-md-0-4f2wx"
I0903 05:05:23.306315       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1"
I0903 05:05:23.306342       1 azure_controller_standard.go:166] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1)
I0903 05:05:25.029494       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:05:25.127494       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:05:25.127589       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" with version 2338
I0903 05:05:25.127669       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase: Failed, bound to: "azuredisk-9336/pvc-znzmh (uid: 1b4dea3d-b8bf-4332-a183-79e93d5a91c1)", boundByController: true
I0903 05:05:25.127752       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is bound to claim azuredisk-9336/pvc-znzmh
I0903 05:05:25.127778       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: claim azuredisk-9336/pvc-znzmh not found
I0903 05:05:25.127820       1 pv_controller.go:1108] reclaimVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: policy is Delete
I0903 05:05:25.127839       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]]
I0903 05:05:25.127913       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1] started
I0903 05:05:25.131135       1 pv_controller.go:1341] isVolumeReleased[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is released
I0903 05:05:25.131157       1 pv_controller.go:1405] doDeleteVolume [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]
I0903 05:05:25.131213       1 pv_controller.go:1260] deletion of volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) since it's in attaching or detaching state
I0903 05:05:25.131231       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: set phase Failed
I0903 05:05:25.131245       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase Failed already set
E0903 05:05:25.131300       1 goroutinemap.go:150] Operation for "delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]" failed. No retries permitted until 2022-09-03 05:05:26.131254751 +0000 UTC m=+749.037392255 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1) since it's in attaching or detaching state"
I0903 05:05:25.214278       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-4f2wx ReadyCondition updated. Updating timestamp.
I0903 05:05:27.778824       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="99.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58360" resp=200
I0903 05:05:30.080924       1 gc_controller.go:161] GC'ing orphaned
I0903 05:05:30.080956       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:05:33.143033       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0903 05:05:37.779001       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51374" resp=200
... skipping 2 lines ...
I0903 05:05:39.088888       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1 was detached from node:capz-m0dmyx-md-0-4f2wx
I0903 05:05:39.088976       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1") on node "capz-m0dmyx-md-0-4f2wx" 
I0903 05:05:39.972337       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:05:40.029730       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:05:40.128238       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:05:40.128308       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" with version 2338
I0903 05:05:40.128351       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase: Failed, bound to: "azuredisk-9336/pvc-znzmh (uid: 1b4dea3d-b8bf-4332-a183-79e93d5a91c1)", boundByController: true
I0903 05:05:40.128387       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is bound to claim azuredisk-9336/pvc-znzmh
I0903 05:05:40.128406       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: claim azuredisk-9336/pvc-znzmh not found
I0903 05:05:40.128414       1 pv_controller.go:1108] reclaimVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: policy is Delete
I0903 05:05:40.128434       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]]
I0903 05:05:40.128473       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1] started
I0903 05:05:40.142002       1 pv_controller.go:1341] isVolumeReleased[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is released
... skipping 3 lines ...
I0903 05:05:43.983055       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 15 items received
I0903 05:05:45.377329       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1
I0903 05:05:45.377365       1 pv_controller.go:1436] volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" deleted
I0903 05:05:45.377380       1 pv_controller.go:1284] deleteVolumeOperation [pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: success
I0903 05:05:45.382394       1 pv_protection_controller.go:205] Got event on PV pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1
I0903 05:05:45.382443       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1" with version 2375
I0903 05:05:45.382605       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: phase: Failed, bound to: "azuredisk-9336/pvc-znzmh (uid: 1b4dea3d-b8bf-4332-a183-79e93d5a91c1)", boundByController: true
I0903 05:05:45.382697       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: volume is bound to claim azuredisk-9336/pvc-znzmh
I0903 05:05:45.382780       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: claim azuredisk-9336/pvc-znzmh not found
I0903 05:05:45.382877       1 pv_controller.go:1108] reclaimVolume[pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1]: policy is Delete
I0903 05:05:45.382993       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]]
I0903 05:05:45.383086       1 pv_controller.go:1764] operation "delete-pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1[6843a072-f98f-4d2c-b5d1-32dc3293b438]" is already running, skipping
I0903 05:05:45.383194       1 pv_protection_controller.go:125] Processing PV pvc-1b4dea3d-b8bf-4332-a183-79e93d5a91c1
... skipping 37 lines ...
I0903 05:05:48.763989       1 disruption.go:418] No matching pdb for pod "azuredisk-volume-tester-f8vjl-6d5865f9bb-pzvkz"
I0903 05:05:48.767548       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb"
I0903 05:05:48.767919       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb" (16.573849ms)
I0903 05:05:48.768115       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb", timestamp:time.Time{wall:0xc0bcd60b2cce6601, ext:771657861437, loc:(*time.Location)(0x731ea80)}}
I0903 05:05:48.768374       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0903 05:05:48.768901       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" duration="22.176632ms"
I0903 05:05:48.769109       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-f8vjl\": the object has been modified; please apply your changes to the latest version and try again"
I0903 05:05:48.769474       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" startTime="2022-09-03 05:05:48.769448499 +0000 UTC m=+771.675586003"
I0903 05:05:48.771268       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-f8vjl" timed out (false) [last progress check: 2022-09-03 05:05:48 +0000 UTC - now: 2022-09-03 05:05:48.771244525 +0000 UTC m=+771.677381929]
I0903 05:05:48.771596       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2205/pvc-r2l7t" with version 2398
I0903 05:05:48.771620       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2205/pvc-r2l7t]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0903 05:05:48.771644       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-2205/pvc-r2l7t]: no volume found
I0903 05:05:48.771653       1 pv_controller.go:1446] provisionClaim[azuredisk-2205/pvc-r2l7t]: started
... skipping 99 lines ...
I0903 05:05:51.845941       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" to node "capz-m0dmyx-md-0-pj7kz".
I0903 05:05:51.878882       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" lun 0 to node "capz-m0dmyx-md-0-pj7kz".
I0903 05:05:51.878923       1 azure_controller_standard.go:93] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - attach disk(capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) with DiskEncryptionSetID()
I0903 05:05:52.331674       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 05:05:52.776470       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9336
I0903 05:05:52.826117       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9336, name default-token-csbqw, uid 20233563-061d-4c41-9cc0-e0ad5b703578, event type delete
E0903 05:05:52.837093       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-jcflj" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I0903 05:05:52.845802       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.171141156fcfd5d4, uid e00fbe4d-7e94-4fa6-8a76-d28318761c6a, event type delete
I0903 05:05:52.851885       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.17114117f2c3cd8a, uid d596cb6b-c399-46dc-92d5-3b6a07abe92a, event type delete
I0903 05:05:52.856211       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.17114119b602a8c9, uid 1532ff46-94fc-4270-b721-4bef380237dd, event type delete
I0903 05:05:52.858882       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.17114119b8ed63e2, uid cf83c6ac-e835-4307-b16a-fd3d697d4da3, event type delete
I0903 05:05:52.862439       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.17114119bf1037f6, uid 78bc151b-cbe3-46f2-ad40-1e31dca3e24b, event type delete
I0903 05:05:52.866107       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-2d82q.171141415c2570d7, uid afece4c8-25bd-4c05-917f-d83a6cb7fa56, event type delete
... skipping 165 lines ...
I0903 05:06:14.108249       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m"
I0903 05:06:14.111216       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl"
I0903 05:06:14.111506       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" duration="6.012788ms"
I0903 05:06:14.111600       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" startTime="2022-09-03 05:06:14.111581145 +0000 UTC m=+797.017718549"
I0903 05:06:14.113400       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-f8vjl" for a progress check after 596s
I0903 05:06:14.113573       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8vjl" duration="1.938828ms"
W0903 05:06:14.130812       1 reconciler.go:385] Multi-Attach error for volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84") from node "capz-m0dmyx-md-0-4f2wx" Volume is already used by pods azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb-pzvkz on node capz-m0dmyx-md-0-pj7kz
I0903 05:06:14.131206       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84\" Volume is already used by pod(s) azuredisk-volume-tester-f8vjl-6d5865f9bb-pzvkz"
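
The Multi-Attach warning above is the attach/detach controller refusing to attach a single-attach (ReadWriteOnce) Azure disk to capz-m0dmyx-md-0-4f2wx while the previous pod's node, capz-m0dmyx-md-0-pj7kz, still holds it; the new replica stays pending until the old attachment is torn down. A simplified Go sketch of that guard, under the assumption of an in-memory map standing in for the controller's actual-state-of-the-world cache (names are illustrative, not the real implementation):

package main

import "fmt"

// attachedNodes records which node each ReadWriteOnce volume is currently
// attached to; a hypothetical, simplified stand-in for the controller's cache.
var attachedNodes = map[string]string{
	"pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84": "capz-m0dmyx-md-0-pj7kz",
}

// canAttach rejects a second attachment of a single-attach volume, which is
// what surfaces as the Multi-Attach error in the log above.
func canAttach(volume, node string) error {
	if current, ok := attachedNodes[volume]; ok && current != node {
		return fmt.Errorf("Multi-Attach error for volume %q: already used on node %s", volume, current)
	}
	attachedNodes[volume] = node
	return nil
}

func main() {
	if err := canAttach("pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84", "capz-m0dmyx-md-0-4f2wx"); err != nil {
		fmt.Println(err) // attachment refused until the old node detaches the disk
	}
}
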
I0903 05:06:17.240403       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0903 05:06:17.779594       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="75.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50658" resp=200
I0903 05:06:23.216265       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-4f2wx"
I0903 05:06:25.032540       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:06:25.130953       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:06:25.131231       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2408
... skipping 551 lines ...
I0903 05:09:03.391386       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m0dmyx-md-0-4f2wx" succeeded. VolumesAttached: []
I0903 05:09:03.391461       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84") on node "capz-m0dmyx-md-0-4f2wx" 
I0903 05:09:03.393356       1 operation_generator.go:1558] Verified volume is safe to detach for volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84") on node "capz-m0dmyx-md-0-4f2wx" 
I0903 05:09:03.408792       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 from node "capz-m0dmyx-md-0-4f2wx"
I0903 05:09:03.409006       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84"
I0903 05:09:03.409156       1 azure_controller_standard.go:166] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84)
I0903 05:09:03.429592       1 pv_controller.go:1260] deletion of volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:09:03.429615       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: set phase Failed
I0903 05:09:03.429627       1 pv_controller.go:858] updating PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: set phase Failed
I0903 05:09:03.433073       1 pv_protection_controller.go:205] Got event on PV pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84
I0903 05:09:03.433290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2775
I0903 05:09:03.433385       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: phase: Failed, bound to: "azuredisk-2205/pvc-r2l7t (uid: 1acced9d-0375-4efe-b8b7-86ee1b02ae84)", boundByController: true
I0903 05:09:03.433465       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is bound to claim azuredisk-2205/pvc-r2l7t
I0903 05:09:03.433492       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: claim azuredisk-2205/pvc-r2l7t not found
I0903 05:09:03.433533       1 pv_controller.go:1108] reclaimVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: policy is Delete
I0903 05:09:03.433550       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]]
I0903 05:09:03.433563       1 pv_controller.go:1764] operation "delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]" is already running, skipping
I0903 05:09:03.433818       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2775
I0903 05:09:03.433841       1 pv_controller.go:879] volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" entered phase "Failed"
I0903 05:09:03.433850       1 pv_controller.go:901] volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
E0903 05:09:03.433930       1 goroutinemap.go:150] Operation for "delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]" failed. No retries permitted until 2022-09-03 05:09:03.933902718 +0000 UTC m=+966.840040222 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:09:03.434222       1 event.go:291] "Event occurred" object="pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:09:05.254135       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-4f2wx ReadyCondition updated. Updating timestamp.
I0903 05:09:07.779652       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="87.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36840" resp=200
I0903 05:09:09.975903       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:09:10.041705       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:09:10.087256       1 gc_controller.go:161] GC'ing orphaned
I0903 05:09:10.087288       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:09:10.138919       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:09:10.139038       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2775
I0903 05:09:10.139127       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: phase: Failed, bound to: "azuredisk-2205/pvc-r2l7t (uid: 1acced9d-0375-4efe-b8b7-86ee1b02ae84)", boundByController: true
I0903 05:09:10.139191       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is bound to claim azuredisk-2205/pvc-r2l7t
I0903 05:09:10.139245       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: claim azuredisk-2205/pvc-r2l7t not found
I0903 05:09:10.139278       1 pv_controller.go:1108] reclaimVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: policy is Delete
I0903 05:09:10.139311       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]]
I0903 05:09:10.139339       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84] started
I0903 05:09:10.149345       1 pv_controller.go:1341] isVolumeReleased[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is released
I0903 05:09:10.149367       1 pv_controller.go:1405] doDeleteVolume [pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]
I0903 05:09:10.149402       1 pv_controller.go:1260] deletion of volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) since it's in attaching or detaching state
I0903 05:09:10.149417       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: set phase Failed
I0903 05:09:10.149429       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: phase Failed already set
E0903 05:09:10.149460       1 goroutinemap.go:150] Operation for "delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]" failed. No retries permitted until 2022-09-03 05:09:11.149437331 +0000 UTC m=+974.055574835 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) since it's in attaching or detaching state"
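The "No retries permitted until ..." errors above come from per-operation exponential backoff: the retry delay starts at 500ms and doubles on each consecutive failure (500ms then 1s here; 2s, 4s and 8s show up for the later volumes in this log). The snippet below is a minimal standalone sketch of that doubling policy, not the actual goroutinemap implementation referenced in these log lines.

package main

import (
	"fmt"
	"time"
)

// backoff mirrors the delay pattern visible in the log: an initial delay that
// doubles after every consecutive failure, capped at a maximum, and reset
// once the operation finally succeeds.
type backoff struct {
	initial time.Duration
	max     time.Duration
	current time.Duration
}

func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func (b *backoff) reset() { b.current = 0 }

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
	for i := 0; i < 5; i++ {
		fmt.Println(b.next()) // 500ms, 1s, 2s, 4s, 8s
	}
	b.reset() // a successful attempt clears the accumulated delay
}
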
I0903 05:09:11.243657       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:09:12.470669       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-control-plane-csq9p"
I0903 05:09:14.518456       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) returned with <nil>
I0903 05:09:14.518685       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84) succeeded
I0903 05:09:14.518767       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84 was detached from node:capz-m0dmyx-md-0-4f2wx
I0903 05:09:14.518864       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84") on node "capz-m0dmyx-md-0-4f2wx" 
... skipping 3 lines ...
I0903 05:09:22.956401       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0903 05:09:24.195192       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 16 items received
I0903 05:09:24.983034       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0903 05:09:25.042360       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:09:25.140016       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:09:25.140080       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2775
I0903 05:09:25.140121       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: phase: Failed, bound to: "azuredisk-2205/pvc-r2l7t (uid: 1acced9d-0375-4efe-b8b7-86ee1b02ae84)", boundByController: true
I0903 05:09:25.140164       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is bound to claim azuredisk-2205/pvc-r2l7t
I0903 05:09:25.140183       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: claim azuredisk-2205/pvc-r2l7t not found
I0903 05:09:25.140195       1 pv_controller.go:1108] reclaimVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: policy is Delete
I0903 05:09:25.140211       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]]
I0903 05:09:25.140238       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84] started
I0903 05:09:25.146730       1 pv_controller.go:1341] isVolumeReleased[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is released
... skipping 4 lines ...
I0903 05:09:30.416415       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84
I0903 05:09:30.416452       1 pv_controller.go:1436] volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" deleted
I0903 05:09:30.416466       1 pv_controller.go:1284] deleteVolumeOperation [pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: success
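The managed-disk deletion only succeeds once the DetachVolume operation above has finished; the earlier "already attached to node" and "attaching or detaching state" failures are the same delete being retried while the detach is still in flight. Below is a hypothetical polling sketch of that detach-then-delete ordering; isDiskAttached and deleteManagedDisk are illustrative placeholders, not functions from the Azure cloud provider or the CSI driver.

package main

import (
	"errors"
	"fmt"
	"time"
)

// isDiskAttached and deleteManagedDisk are hypothetical stand-ins for
// "ask the cloud API whether the disk is attached to any VM" and "delete the disk".
func isDiskAttached(diskURI string) (bool, error) { return false, nil }
func deleteManagedDisk(diskURI string) error      { return nil }

// deleteWhenDetached waits until the disk is no longer attached, then deletes it,
// mirroring the ordering seen in this log instead of retrying blindly.
func deleteWhenDetached(diskURI string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		attached, err := isDiskAttached(diskURI)
		if err != nil {
			return err
		}
		if !attached {
			return deleteManagedDisk(diskURI)
		}
		time.Sleep(5 * time.Second) // give the in-flight detach time to complete
	}
	return errors.New("timed out waiting for disk to detach")
}

func main() {
	fmt.Println(deleteWhenDetached("/subscriptions/.../disks/example-disk", 2*time.Minute))
}
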
I0903 05:09:30.435958       1 pv_protection_controller.go:205] Got event on PV pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84
I0903 05:09:30.435993       1 pv_protection_controller.go:125] Processing PV pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84
I0903 05:09:30.436367       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" with version 2815
I0903 05:09:30.436467       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: phase: Failed, bound to: "azuredisk-2205/pvc-r2l7t (uid: 1acced9d-0375-4efe-b8b7-86ee1b02ae84)", boundByController: true
I0903 05:09:30.436522       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: volume is bound to claim azuredisk-2205/pvc-r2l7t
I0903 05:09:30.436542       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: claim azuredisk-2205/pvc-r2l7t not found
I0903 05:09:30.436551       1 pv_controller.go:1108] reclaimVolume[pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84]: policy is Delete
I0903 05:09:30.436568       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84[c3c6aad0-8b86-46f9-a5fd-84c7d4d1bbc4]]
I0903 05:09:30.436617       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84] started
I0903 05:09:30.444012       1 pv_controller.go:1244] Volume "pvc-1acced9d-0375-4efe-b8b7-86ee1b02ae84" is already being deleted
... skipping 79 lines ...
I0903 05:09:36.183338       1 pv_controller.go:1039] volume "pvc-adf93fc8-31f4-4a5e-9996-676eda0d7293" status after binding: phase: Bound, bound to: "azuredisk-3410/pvc-t6bht (uid: adf93fc8-31f4-4a5e-9996-676eda0d7293)", boundByController: true
I0903 05:09:36.183352       1 pv_controller.go:1040] claim "azuredisk-3410/pvc-t6bht" status after binding: phase: Bound, bound to: "pvc-adf93fc8-31f4-4a5e-9996-676eda0d7293", bindCompleted: true, boundByController: true
I0903 05:09:37.487226       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 15 items received
I0903 05:09:37.736031       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2205
I0903 05:09:37.767700       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2205, name default-token-88hkx, uid a334f072-d4dd-419b-810f-ad01aa9b4789, event type delete
I0903 05:09:37.782519       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="60.001µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37006" resp=200
E0903 05:09:37.787148       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2205/default: secrets "default-token-8jxdn" is forbidden: unable to create new content in namespace azuredisk-2205 because it is being terminated
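The tokens_controller error above is benign during test teardown: once a namespace enters the Terminating phase the API server rejects creation of new objects in it, so the token controller's attempt to mint a replacement service-account token is refused. A small client-go sketch for confirming that a namespace is terminating follows; the namespace name is taken from the log, the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed kubeconfig path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), "azuredisk-2205", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// A namespace being deleted reports phase Terminating until all of its content is gone.
	fmt.Println("phase:", ns.Status.Phase, "terminating:", ns.Status.Phase == corev1.NamespaceTerminating)
}
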
I0903 05:09:37.790154       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.17114155c3d8293c, uid 3ed5536d-ddc2-45ed-8968-9d72c1f9c0c1, event type delete
I0903 05:09:37.794293       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.17114155c643e976, uid 05eaa1a3-e8b0-453d-9e3d-6c61278d0e1c, event type delete
I0903 05:09:37.799340       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.17114164f34c1247, uid 993a2777-0796-4d1c-b695-90f9d0d72d95, event type delete
I0903 05:09:37.802539       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.1711417268c5d8f8, uid b91037c6-169b-4912-b37c-a03a5a7eeca3, event type delete
I0903 05:09:37.806336       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.171141750003c594, uid d263e761-ed9a-47bb-bd64-481dd7c84e04, event type delete
I0903 05:09:37.810172       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2205, name azuredisk-volume-tester-f8vjl-6d5865f9bb-npq7m.17114175029f4e09, uid 96397a22-6feb-4526-9e76-b0d0034e5137, event type delete
... skipping 200 lines ...
I0903 05:09:52.416140       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-jvnph" with version 2931
I0903 05:09:52.415972       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1387/pvc-jvnph"
I0903 05:09:52.417622       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b StorageAccountType:StandardSSD_LRS Size:10
I0903 05:09:53.407854       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0903 05:09:53.457954       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-t6bht.17114184d0870ab0, uid 2dc3491b-239c-4784-b901-a762b2f315d2, event type delete
I0903 05:09:53.484433       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-5gc8k, uid 3d693c60-9d01-4c0b-a3a7-88cb1bdd0c97, event type delete
E0903 05:09:53.500823       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-2pnqn" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0903 05:09:53.506691       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 31d10b6a-4a67-45c9-9dda-1b0d83beac61, event type delete
I0903 05:09:53.513121       1 publisher.go:181] Finished syncing namespace "azuredisk-3410" (6.214689ms)
I0903 05:09:53.529522       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0903 05:09:53.529733       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid ffa91f1d-be5d-443b-ac51-5909b854df41, event type delete
I0903 05:09:53.529756       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.5µs)
I0903 05:09:53.555938       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
... skipping 556 lines ...
I0903 05:10:32.084391       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:10:32.084434       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:10:32.084470       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
I0903 05:10:32.084498       1 pv_controller.go:1764] operation "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" is already running, skipping
I0903 05:10:32.085900       1 pv_controller.go:1341] isVolumeReleased[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is released
I0903 05:10:32.085917       1 pv_controller.go:1405] doDeleteVolume [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]
I0903 05:10:32.125474       1 pv_controller.go:1260] deletion of volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:10:32.125500       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:10:32.125508       1 pv_controller.go:858] updating PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:10:32.129441       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:10:32.129473       1 pv_controller.go:879] volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" entered phase "Failed"
I0903 05:10:32.129482       1 pv_controller.go:901] volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:10:32.129878       1 event.go:291] "Event occurred" object="pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:10:32.130400       1 pv_protection_controller.go:205] Got event on PV pvc-0075905b-eb04-48f3-8bce-d66e04d8702b
I0903 05:10:32.130476       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:10:32.130505       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:10:32.130573       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:10:32.130607       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:10:32.130630       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:10:32.130643       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
E0903 05:10:32.130719       1 goroutinemap.go:150] Operation for "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" failed. No retries permitted until 2022-09-03 05:10:32.629560767 +0000 UTC m=+1055.535698271 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:10:32.130779       1 pv_controller.go:1766] operation "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" postponed due to exponential backoff
I0903 05:10:37.780323       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="62.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:52754" resp=200
I0903 05:10:37.957703       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0903 05:10:39.978758       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:10:40.052940       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:10:40.144174       1 pv_controller_base.go:528] resyncing PV controller
... skipping 9 lines ...
I0903 05:10:40.144772       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: volume is bound to claim azuredisk-1387/pvc-7nns6
I0903 05:10:40.144806       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: claim azuredisk-1387/pvc-7nns6 found: phase: Bound, bound to: "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a", bindCompleted: true, boundByController: true
I0903 05:10:40.144816       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: all is bound
I0903 05:10:40.144822       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
I0903 05:10:40.144830       1 pv_controller.go:861] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: phase Bound already set
I0903 05:10:40.144841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:10:40.144860       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:10:40.144886       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:10:40.144904       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:10:40.144913       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:10:40.144931       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
I0903 05:10:40.144962       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b] started
I0903 05:10:40.145218       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-7nns6" with version 2953
... skipping 27 lines ...
I0903 05:10:40.145556       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-ngpdd] status: phase Bound already set
I0903 05:10:40.145564       1 pv_controller.go:1038] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" bound to claim "azuredisk-1387/pvc-ngpdd"
I0903 05:10:40.145578       1 pv_controller.go:1039] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-ngpdd (uid: f7aa3b29-02a0-45c7-8c25-d36b613b4f07)", boundByController: true
I0903 05:10:40.145591       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-ngpdd" status after binding: phase: Bound, bound to: "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07", bindCompleted: true, boundByController: true
I0903 05:10:40.153777       1 pv_controller.go:1341] isVolumeReleased[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is released
I0903 05:10:40.153797       1 pv_controller.go:1405] doDeleteVolume [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]
I0903 05:10:40.186940       1 pv_controller.go:1260] deletion of volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:10:40.186962       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:10:40.186972       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase Failed already set
E0903 05:10:40.187055       1 goroutinemap.go:150] Operation for "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" failed. No retries permitted until 2022-09-03 05:10:41.18700641 +0000 UTC m=+1064.093143914 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:10:41.298567       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:10:42.145160       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 6 items received
I0903 05:10:42.525051       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
I0903 05:10:42.525101       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-36a731bd-4105-423c-91cb-0dd653cdce8a to the node "capz-m0dmyx-md-0-pj7kz" mounted false
I0903 05:10:42.525113       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07 to the node "capz-m0dmyx-md-0-pj7kz" mounted false
I0903 05:10:42.525122       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b to the node "capz-m0dmyx-md-0-pj7kz" mounted false
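"SetVolumeMountedByNode ... mounted false" records that kubelet has unmounted these volumes and dropped them from node.Status.VolumesInUse; that is the signal behind the "Verified volume is safe to detach" messages, since the attach-detach controller will not detach a volume the node still reports as in use. A minimal sketch of that check against the node status follows; the kubeconfig path and the example node/volume names in main are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// volumeInUse reports whether the node still lists the given unique volume name
// in status.volumesInUse, i.e. kubelet has not yet finished unmounting it.
func volumeInUse(cs kubernetes.Interface, nodeName, volumeName string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, v := range node.Status.VolumesInUse {
		if string(v) == volumeName {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Node and volume names here are illustrative; substitute the ones from the log.
	inUse, err := volumeInUse(cs, "capz-m0dmyx-md-0-pj7kz",
		"kubernetes.io/azure-disk//subscriptions/.../disks/example-disk")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("still in use:", inUse)
}
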
... skipping 28 lines ...
I0903 05:10:50.090399       1 gc_controller.go:161] GC'ing orphaned
I0903 05:10:50.090494       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:10:52.984283       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 30 items received
I0903 05:10:55.053147       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:10:55.144467       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:10:55.144553       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:10:55.144595       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:10:55.144626       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:10:55.144645       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:10:55.144653       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:10:55.144668       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
I0903 05:10:55.144685       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" with version 2965
I0903 05:10:55.144702       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07]: phase: Bound, bound to: "azuredisk-1387/pvc-ngpdd (uid: f7aa3b29-02a0-45c7-8c25-d36b613b4f07)", boundByController: true
... skipping 41 lines ...
I0903 05:10:55.144821       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: claim azuredisk-1387/pvc-7nns6 found: phase: Bound, bound to: "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a", bindCompleted: true, boundByController: true
I0903 05:10:55.145120       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: all is bound
I0903 05:10:55.145126       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
I0903 05:10:55.145133       1 pv_controller.go:861] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: phase Bound already set
I0903 05:10:55.150851       1 pv_controller.go:1341] isVolumeReleased[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is released
I0903 05:10:55.150869       1 pv_controller.go:1405] doDeleteVolume [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]
I0903 05:10:55.172645       1 pv_controller.go:1260] deletion of volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:10:55.172663       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:10:55.172671       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase Failed already set
E0903 05:10:55.172704       1 goroutinemap.go:150] Operation for "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" failed. No retries permitted until 2022-09-03 05:10:57.172678925 +0000 UTC m=+1080.078816429 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:10:57.778730       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="73.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41404" resp=200
I0903 05:10:58.096304       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-36a731bd-4105-423c-91cb-0dd653cdce8a) returned with <nil>
I0903 05:10:58.096352       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-36a731bd-4105-423c-91cb-0dd653cdce8a) succeeded
I0903 05:10:58.096362       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-36a731bd-4105-423c-91cb-0dd653cdce8a was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:10:58.096386       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-36a731bd-4105-423c-91cb-0dd653cdce8a") on node "capz-m0dmyx-md-0-pj7kz" 
I0903 05:10:58.131541       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07"
... skipping 15 lines ...
I0903 05:11:10.144810       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: all is bound
I0903 05:11:10.144818       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
I0903 05:11:10.144828       1 pv_controller.go:861] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: phase Bound already set
I0903 05:11:10.144841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:11:10.144844       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-1387/pvc-7nns6]: volume "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a" found: phase: Bound, bound to: "azuredisk-1387/pvc-7nns6 (uid: 36a731bd-4105-423c-91cb-0dd653cdce8a)", boundByController: true
I0903 05:11:10.144856       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-1387/pvc-7nns6]: claim is already correctly bound
I0903 05:11:10.144860       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:11:10.144866       1 pv_controller.go:1012] binding volume "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a" to claim "azuredisk-1387/pvc-7nns6"
I0903 05:11:10.144876       1 pv_controller.go:910] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: binding to "azuredisk-1387/pvc-7nns6"
I0903 05:11:10.144878       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:11:10.144896       1 pv_controller.go:922] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: already bound to "azuredisk-1387/pvc-7nns6"
I0903 05:11:10.144900       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:11:10.144904       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
... skipping 30 lines ...
I0903 05:11:10.145173       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-ngpdd] status: phase Bound already set
I0903 05:11:10.145183       1 pv_controller.go:1038] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" bound to claim "azuredisk-1387/pvc-ngpdd"
I0903 05:11:10.145198       1 pv_controller.go:1039] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-ngpdd (uid: f7aa3b29-02a0-45c7-8c25-d36b613b4f07)", boundByController: true
I0903 05:11:10.145211       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-ngpdd" status after binding: phase: Bound, bound to: "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07", bindCompleted: true, boundByController: true
I0903 05:11:10.154721       1 pv_controller.go:1341] isVolumeReleased[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is released
I0903 05:11:10.154740       1 pv_controller.go:1405] doDeleteVolume [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]
I0903 05:11:10.176756       1 pv_controller.go:1260] deletion of volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:11:10.176777       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:11:10.176785       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase Failed already set
E0903 05:11:10.176820       1 goroutinemap.go:150] Operation for "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" failed. No retries permitted until 2022-09-03 05:11:14.176794957 +0000 UTC m=+1097.082932361 (durationBeforeRetry 4s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:11:11.320431       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:11:12.990133       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0903 05:11:13.747046       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07) returned with <nil>
I0903 05:11:13.747092       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07) succeeded
I0903 05:11:13.747103       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07 was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:11:13.747131       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07") on node "capz-m0dmyx-md-0-pj7kz" 
... skipping 9 lines ...
I0903 05:11:25.145323       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: volume is bound to claim azuredisk-1387/pvc-7nns6
I0903 05:11:25.145360       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: claim azuredisk-1387/pvc-7nns6 found: phase: Bound, bound to: "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a", bindCompleted: true, boundByController: true
I0903 05:11:25.145396       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: all is bound
I0903 05:11:25.145418       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
I0903 05:11:25.145441       1 pv_controller.go:861] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: phase Bound already set
I0903 05:11:25.145476       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:11:25.145562       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:11:25.145582       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:11:25.145605       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:11:25.145613       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:11:25.145631       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
I0903 05:11:25.145647       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" with version 2965
I0903 05:11:25.145691       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07]: phase: Bound, bound to: "azuredisk-1387/pvc-ngpdd (uid: f7aa3b29-02a0-45c7-8c25-d36b613b4f07)", boundByController: true
... skipping 34 lines ...
I0903 05:11:25.147959       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-ngpdd] status: phase Bound already set
I0903 05:11:25.147970       1 pv_controller.go:1038] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" bound to claim "azuredisk-1387/pvc-ngpdd"
I0903 05:11:25.147987       1 pv_controller.go:1039] volume "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-ngpdd (uid: f7aa3b29-02a0-45c7-8c25-d36b613b4f07)", boundByController: true
I0903 05:11:25.148004       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-ngpdd" status after binding: phase: Bound, bound to: "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07", bindCompleted: true, boundByController: true
I0903 05:11:25.152462       1 pv_controller.go:1341] isVolumeReleased[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is released
I0903 05:11:25.152481       1 pv_controller.go:1405] doDeleteVolume [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]
I0903 05:11:25.152628       1 pv_controller.go:1260] deletion of volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) since it's in attaching or detaching state
I0903 05:11:25.152646       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: set phase Failed
I0903 05:11:25.152656       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase Failed already set
E0903 05:11:25.152738       1 goroutinemap.go:150] Operation for "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" failed. No retries permitted until 2022-09-03 05:11:33.152664396 +0000 UTC m=+1116.058801900 (durationBeforeRetry 8s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) since it's in attaching or detaching state"
I0903 05:11:27.779979       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="82.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58918" resp=200
I0903 05:11:29.350452       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) returned with <nil>
I0903 05:11:29.350492       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b) succeeded
I0903 05:11:29.350667       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:11:29.350703       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b") on node "capz-m0dmyx-md-0-pj7kz" 
I0903 05:11:30.002985       1 controller.go:272] Triggering nodeSync
... skipping 13 lines ...
I0903 05:11:40.145961       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: volume is bound to claim azuredisk-1387/pvc-7nns6
I0903 05:11:40.146002       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: claim azuredisk-1387/pvc-7nns6 found: phase: Bound, bound to: "pvc-36a731bd-4105-423c-91cb-0dd653cdce8a", bindCompleted: true, boundByController: true
I0903 05:11:40.146041       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: all is bound
I0903 05:11:40.146062       1 pv_controller.go:858] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: set phase Bound
I0903 05:11:40.146087       1 pv_controller.go:861] updating PersistentVolume[pvc-36a731bd-4105-423c-91cb-0dd653cdce8a]: phase Bound already set
I0903 05:11:40.146120       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3054
I0903 05:11:40.146167       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:11:40.146217       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:11:40.146261       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:11:40.146262       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-ngpdd" with version 2968
I0903 05:11:40.146269       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:11:40.146281       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1387/pvc-ngpdd]: phase: Bound, bound to: "pvc-f7aa3b29-02a0-45c7-8c25-d36b613b4f07", bindCompleted: true, boundByController: true
I0903 05:11:40.146287       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
... skipping 41 lines ...
I0903 05:11:45.471234       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b
I0903 05:11:45.471300       1 pv_controller.go:1436] volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" deleted
I0903 05:11:45.471331       1 pv_controller.go:1284] deleteVolumeOperation [pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: success
I0903 05:11:45.479932       1 pv_protection_controller.go:205] Got event on PV pvc-0075905b-eb04-48f3-8bce-d66e04d8702b
I0903 05:11:45.479962       1 pv_protection_controller.go:125] Processing PV pvc-0075905b-eb04-48f3-8bce-d66e04d8702b
I0903 05:11:45.480360       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0075905b-eb04-48f3-8bce-d66e04d8702b" with version 3164
I0903 05:11:45.480398       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: phase: Failed, bound to: "azuredisk-1387/pvc-jvnph (uid: 0075905b-eb04-48f3-8bce-d66e04d8702b)", boundByController: true
I0903 05:11:45.480455       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: volume is bound to claim azuredisk-1387/pvc-jvnph
I0903 05:11:45.480482       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: claim azuredisk-1387/pvc-jvnph not found
I0903 05:11:45.480527       1 pv_controller.go:1108] reclaimVolume[pvc-0075905b-eb04-48f3-8bce-d66e04d8702b]: policy is Delete
I0903 05:11:45.480564       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]]
I0903 05:11:45.480578       1 pv_controller.go:1764] operation "delete-pvc-0075905b-eb04-48f3-8bce-d66e04d8702b[37f06a96-2cd4-479c-80bd-9b9aa9410830]" is already running, skipping
I0903 05:11:45.486154       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-0075905b-eb04-48f3-8bce-d66e04d8702b
... skipping 592 lines ...
I0903 05:12:50.378428       1 pv_controller.go:1108] reclaimVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: policy is Delete
I0903 05:12:50.378438       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]]
I0903 05:12:50.378445       1 pv_controller.go:1764] operation "delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]" is already running, skipping
I0903 05:12:50.378467       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841] started
I0903 05:12:50.381152       1 pv_controller.go:1341] isVolumeReleased[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is released
I0903 05:12:50.381167       1 pv_controller.go:1405] doDeleteVolume [pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]
I0903 05:12:50.438137       1 pv_controller.go:1260] deletion of volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
I0903 05:12:50.438163       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: set phase Failed
I0903 05:12:50.438173       1 pv_controller.go:858] updating PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: set phase Failed
I0903 05:12:50.441763       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" with version 3338
I0903 05:12:50.441946       1 pv_controller.go:879] volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" entered phase "Failed"
I0903 05:12:50.442080       1 pv_controller.go:901] volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted
E0903 05:12:50.442183       1 goroutinemap.go:150] Operation for "delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]" failed. No retries permitted until 2022-09-03 05:12:50.942135131 +0000 UTC m=+1193.848272535 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:12:50.442463       1 event.go:291] "Event occurred" object="pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-pj7kz), could not be deleted"
I0903 05:12:50.442782       1 pv_protection_controller.go:205] Got event on PV pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841
I0903 05:12:50.442927       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" with version 3338
I0903 05:12:50.443058       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: phase: Failed, bound to: "azuredisk-4547/pvc-2pss4 (uid: f8fbcd2c-2545-4817-9e0d-e77ec6d6d841)", boundByController: true
I0903 05:12:50.443188       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is bound to claim azuredisk-4547/pvc-2pss4
I0903 05:12:50.443301       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: claim azuredisk-4547/pvc-2pss4 not found
I0903 05:12:50.443402       1 pv_controller.go:1108] reclaimVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: policy is Delete
I0903 05:12:50.443524       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]]
I0903 05:12:50.443633       1 pv_controller.go:1766] operation "delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]" postponed due to exponential backoff
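
The retry cadence in the lines above (and for the other volumes later in this log) comes from the controller's per-operation exponential backoff: every failed delete roughly doubles durationBeforeRetry, which is why the log shows 500ms here and then 1s, 2s and 4s on later attempts. The Go sketch below only illustrates that doubling pattern; it is not the kube-controller-manager implementation, and the names in it are made up for illustration.

// Minimal illustration of the doubling durationBeforeRetry values seen in
// this log (500ms -> 1s -> 2s -> 4s). Not the actual controller code.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond // first "No retries permitted until ..." window
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, backoff)
		backoff *= 2 // each failure doubles the wait before the next delete attempt
	}
}
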
I0903 05:12:52.592439       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m0dmyx-md-0-pj7kz"
... skipping 38 lines ...
I0903 05:12:55.150626       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is bound to claim azuredisk-4547/pvc-8gx84
I0903 05:12:55.150643       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: claim azuredisk-4547/pvc-8gx84 found: phase: Bound, bound to: "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89", bindCompleted: true, boundByController: true
I0903 05:12:55.150660       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: all is bound
I0903 05:12:55.150668       1 pv_controller.go:858] updating PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: set phase Bound
I0903 05:12:55.150677       1 pv_controller.go:861] updating PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: phase Bound already set
I0903 05:12:55.150696       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" with version 3338
I0903 05:12:55.150716       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: phase: Failed, bound to: "azuredisk-4547/pvc-2pss4 (uid: f8fbcd2c-2545-4817-9e0d-e77ec6d6d841)", boundByController: true
I0903 05:12:55.150735       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is bound to claim azuredisk-4547/pvc-2pss4
I0903 05:12:55.150753       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: claim azuredisk-4547/pvc-2pss4 not found
I0903 05:12:55.150760       1 pv_controller.go:1108] reclaimVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: policy is Delete
I0903 05:12:55.150778       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]]
I0903 05:12:55.150821       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841] started
I0903 05:12:55.157378       1 pv_controller.go:1341] isVolumeReleased[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is released
I0903 05:12:55.157398       1 pv_controller.go:1405] doDeleteVolume [pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]
I0903 05:12:55.157433       1 pv_controller.go:1260] deletion of volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) since it's in attaching or detaching state
I0903 05:12:55.157449       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: set phase Failed
I0903 05:12:55.157457       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: phase Failed already set
E0903 05:12:55.157496       1 goroutinemap.go:150] Operation for "delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]" failed. No retries permitted until 2022-09-03 05:12:56.15746546 +0000 UTC m=+1199.063602964 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841) since it's in attaching or detaching state"
I0903 05:12:55.298525       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-pj7kz ReadyCondition updated. Updating timestamp.
I0903 05:12:56.162877       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 05:12:57.779638       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38276" resp=200
I0903 05:13:01.041351       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0903 05:13:06.445582       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0903 05:13:07.780067       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="100.702µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53296" resp=200
... skipping 14 lines ...
I0903 05:13:10.090369       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:13:10.096570       1 gc_controller.go:161] GC'ing orphaned
I0903 05:13:10.096594       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:13:10.151152       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:13:10.151221       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" with version 3338
I0903 05:13:10.151221       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-8gx84" with version 3235
I0903 05:13:10.151255       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: phase: Failed, bound to: "azuredisk-4547/pvc-2pss4 (uid: f8fbcd2c-2545-4817-9e0d-e77ec6d6d841)", boundByController: true
I0903 05:13:10.151268       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-4547/pvc-8gx84]: phase: Bound, bound to: "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89", bindCompleted: true, boundByController: true
I0903 05:13:10.151292       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is bound to claim azuredisk-4547/pvc-2pss4
I0903 05:13:10.151300       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-8gx84]: volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" found: phase: Bound, bound to: "azuredisk-4547/pvc-8gx84 (uid: 2a37474d-4985-4c47-aee6-d4c86be29b89)", boundByController: true
I0903 05:13:10.151310       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-8gx84]: claim is already correctly bound
I0903 05:13:10.151313       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: claim azuredisk-4547/pvc-2pss4 not found
I0903 05:13:10.151320       1 pv_controller.go:1012] binding volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" to claim "azuredisk-4547/pvc-8gx84"
... skipping 26 lines ...
I0903 05:13:15.345446       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841
I0903 05:13:15.345514       1 pv_controller.go:1436] volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" deleted
I0903 05:13:15.345526       1 pv_controller.go:1284] deleteVolumeOperation [pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: success
I0903 05:13:15.352897       1 pv_protection_controller.go:205] Got event on PV pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841
I0903 05:13:15.352925       1 pv_protection_controller.go:125] Processing PV pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841
I0903 05:13:15.353278       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" with version 3379
I0903 05:13:15.353336       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: phase: Failed, bound to: "azuredisk-4547/pvc-2pss4 (uid: f8fbcd2c-2545-4817-9e0d-e77ec6d6d841)", boundByController: true
I0903 05:13:15.353361       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: volume is bound to claim azuredisk-4547/pvc-2pss4
I0903 05:13:15.353394       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: claim azuredisk-4547/pvc-2pss4 not found
I0903 05:13:15.353404       1 pv_controller.go:1108] reclaimVolume[pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841]: policy is Delete
I0903 05:13:15.353419       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]]
I0903 05:13:15.353425       1 pv_controller.go:1764] operation "delete-pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841[2610b9b7-05cd-42bf-9391-471e7fb22628]" is already running, skipping
I0903 05:13:15.357848       1 pv_controller_base.go:235] volume "pvc-f8fbcd2c-2545-4817-9e0d-e77ec6d6d841" deleted
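
Note the ordering the errors above imply: doDeleteVolume cannot remove the managed disk while it is still attached to a VM ("already attached to node ... could not be deleted") or mid-transition ("in attaching or detaching state"); the delete only succeeds, as it just did for pvc-f8fbcd2c, once the detach has completed. Below is a rough sketch of that guard with hypothetical names; it is not the Azure cloud-provider API.

package main

import (
	"errors"
	"fmt"
)

type diskState string

const (
	stateAttached  diskState = "attached"
	stateAttaching diskState = "attaching"
	stateDetaching diskState = "detaching"
	stateDetached  diskState = "detached"
)

// deleteManagedDisk mirrors the two failure modes reported in the log and
// only succeeds once the disk is fully detached.
func deleteManagedDisk(diskURI string, state diskState) error {
	switch state {
	case stateAttached:
		return fmt.Errorf("disk(%s) already attached to node, could not be deleted", diskURI)
	case stateAttaching, stateDetaching:
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", diskURI)
	case stateDetached:
		return nil
	}
	return errors.New("unknown disk state")
}

func main() {
	fmt.Println(deleteManagedDisk("/subscriptions/.../disks/example-disk", stateDetaching))
}
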
... skipping 44 lines ...
I0903 05:13:16.088329       1 pv_controller.go:1108] reclaimVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: policy is Delete
I0903 05:13:16.088340       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]]
I0903 05:13:16.088349       1 pv_controller.go:1764] operation "delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]" is already running, skipping
I0903 05:13:16.088375       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2a37474d-4985-4c47-aee6-d4c86be29b89] started
I0903 05:13:16.091528       1 pv_controller.go:1341] isVolumeReleased[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is released
I0903 05:13:16.091625       1 pv_controller.go:1405] doDeleteVolume [pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]
I0903 05:13:16.091731       1 pv_controller.go:1260] deletion of volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) since it's in attaching or detaching state
I0903 05:13:16.091747       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: set phase Failed
I0903 05:13:16.091811       1 pv_controller.go:858] updating PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: set phase Failed
I0903 05:13:16.094212       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" with version 3388
I0903 05:13:16.094570       1 pv_controller.go:879] volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" entered phase "Failed"
I0903 05:13:16.094658       1 pv_controller.go:901] volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) since it's in attaching or detaching state
E0903 05:13:16.094814       1 goroutinemap.go:150] Operation for "delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]" failed. No retries permitted until 2022-09-03 05:13:16.594756482 +0000 UTC m=+1219.500893986 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) since it's in attaching or detaching state"
I0903 05:13:16.095129       1 event.go:291] "Event occurred" object="pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) since it's in attaching or detaching state"
I0903 05:13:16.095290       1 pv_protection_controller.go:205] Got event on PV pvc-2a37474d-4985-4c47-aee6-d4c86be29b89
I0903 05:13:16.095387       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" with version 3388
I0903 05:13:16.095511       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: phase: Failed, bound to: "azuredisk-4547/pvc-8gx84 (uid: 2a37474d-4985-4c47-aee6-d4c86be29b89)", boundByController: true
I0903 05:13:16.095603       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is bound to claim azuredisk-4547/pvc-8gx84
I0903 05:13:16.095690       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: claim azuredisk-4547/pvc-8gx84 not found
I0903 05:13:16.095771       1 pv_controller.go:1108] reclaimVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: policy is Delete
I0903 05:13:16.095813       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]]
I0903 05:13:16.095899       1 pv_controller.go:1766] operation "delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]" postponed due to exponential backoff
I0903 05:13:17.778916       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="66.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39834" resp=200
I0903 05:13:23.778737       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-pj7kz) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) returned with <nil>
I0903 05:13:23.778778       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89) succeeded
I0903 05:13:23.778790       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89 was detached from node:capz-m0dmyx-md-0-pj7kz
I0903 05:13:23.778930       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89") on node "capz-m0dmyx-md-0-pj7kz" 
I0903 05:13:25.066230       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:13:25.151816       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:13:25.151904       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" with version 3388
I0903 05:13:25.152006       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: phase: Failed, bound to: "azuredisk-4547/pvc-8gx84 (uid: 2a37474d-4985-4c47-aee6-d4c86be29b89)", boundByController: true
I0903 05:13:25.152078       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is bound to claim azuredisk-4547/pvc-8gx84
I0903 05:13:25.152101       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: claim azuredisk-4547/pvc-8gx84 not found
I0903 05:13:25.152145       1 pv_controller.go:1108] reclaimVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: policy is Delete
I0903 05:13:25.152198       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]]
I0903 05:13:25.152264       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2a37474d-4985-4c47-aee6-d4c86be29b89] started
I0903 05:13:25.159337       1 pv_controller.go:1341] isVolumeReleased[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is released
... skipping 4 lines ...
I0903 05:13:30.366156       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89
I0903 05:13:30.366190       1 pv_controller.go:1436] volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" deleted
I0903 05:13:30.366204       1 pv_controller.go:1284] deleteVolumeOperation [pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: success
I0903 05:13:30.375338       1 pv_protection_controller.go:205] Got event on PV pvc-2a37474d-4985-4c47-aee6-d4c86be29b89
I0903 05:13:30.375369       1 pv_protection_controller.go:125] Processing PV pvc-2a37474d-4985-4c47-aee6-d4c86be29b89
I0903 05:13:30.375606       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" with version 3408
I0903 05:13:30.375645       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: phase: Failed, bound to: "azuredisk-4547/pvc-8gx84 (uid: 2a37474d-4985-4c47-aee6-d4c86be29b89)", boundByController: true
I0903 05:13:30.375672       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: volume is bound to claim azuredisk-4547/pvc-8gx84
I0903 05:13:30.375694       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: claim azuredisk-4547/pvc-8gx84 not found
I0903 05:13:30.375700       1 pv_controller.go:1108] reclaimVolume[pvc-2a37474d-4985-4c47-aee6-d4c86be29b89]: policy is Delete
I0903 05:13:30.375712       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]]
I0903 05:13:30.375717       1 pv_controller.go:1764] operation "delete-pvc-2a37474d-4985-4c47-aee6-d4c86be29b89[d7467d8e-6ba9-4037-ab8f-c90a25365a27]" is already running, skipping
I0903 05:13:30.384168       1 pv_controller_base.go:235] volume "pvc-2a37474d-4985-4c47-aee6-d4c86be29b89" deleted
... skipping 287 lines ...
I0903 05:13:37.254141       1 pv_controller.go:1039] volume "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-6glfd (uid: a8adc108-94da-4baf-a01d-f2bc59bc6ffe)", boundByController: true
I0903 05:13:37.254154       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-6glfd" status after binding: phase: Bound, bound to: "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe", bindCompleted: true, boundByController: true
I0903 05:13:37.540887       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0903 05:13:37.567571       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid d1ada246-05c7-4b91-b457-fff6f77c2dfa, event type delete
I0903 05:13:37.569333       1 publisher.go:181] Finished syncing namespace "azuredisk-7051" (1.709926ms)
I0903 05:13:37.601447       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-ttvln, uid bd929c6b-6e17-406c-a5bc-3a06921bbf26, event type delete
E0903 05:13:37.613313       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-ckj2w" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0903 05:13:37.642741       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0903 05:13:37.642951       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid de512507-ac5c-4bcf-9d29-21a21cd9d024, event type delete
I0903 05:13:37.643047       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (1.7µs)
I0903 05:13:37.684006       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-7578/azuredisk-volume-tester-rkp8n"
I0903 05:13:37.684144       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-rkp8n"
I0903 05:13:37.684207       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-rkp8n, PodDisruptionBudget controller will avoid syncing.
... skipping 388 lines ...
I0903 05:14:12.290101       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]]
I0903 05:14:12.290115       1 pv_controller.go:1764] operation "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" is already running, skipping
I0903 05:14:12.289239       1 pv_protection_controller.go:205] Got event on PV pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a
I0903 05:14:12.289746       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a] started
I0903 05:14:12.291773       1 pv_controller.go:1341] isVolumeReleased[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is released
I0903 05:14:12.291789       1 pv_controller.go:1405] doDeleteVolume [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]
I0903 05:14:12.316389       1 pv_controller.go:1260] deletion of volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:14:12.316410       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: set phase Failed
I0903 05:14:12.316419       1 pv_controller.go:858] updating PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: set phase Failed
I0903 05:14:12.323521       1 pv_protection_controller.go:205] Got event on PV pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a
I0903 05:14:12.323533       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:14:12.323742       1 pv_controller.go:879] volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" entered phase "Failed"
I0903 05:14:12.323836       1 pv_controller.go:901] volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
E0903 05:14:12.323967       1 goroutinemap.go:150] Operation for "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" failed. No retries permitted until 2022-09-03 05:14:12.823941377 +0000 UTC m=+1275.730078781 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:14:12.323598       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:14:12.324169       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:14:12.324281       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:14:12.324398       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: claim azuredisk-7578/pvc-rbkbw not found
I0903 05:14:12.324528       1 pv_controller.go:1108] reclaimVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: policy is Delete
I0903 05:14:12.324647       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]]
I0903 05:14:12.324758       1 pv_controller.go:1766] operation "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" postponed due to exponential backoff
I0903 05:14:12.324474       1 event.go:291] "Event occurred" object="pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
... skipping 34 lines ...
I0903 05:14:23.631510       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a from node "capz-m0dmyx-md-0-4f2wx"
I0903 05:14:25.069074       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:14:25.154328       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:14:25.154677       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-6glfd" with version 3486
I0903 05:14:25.155038       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-6glfd]: phase: Bound, bound to: "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe", bindCompleted: true, boundByController: true
I0903 05:14:25.155072       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:14:25.155382       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:14:25.155519       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:14:25.155641       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: claim azuredisk-7578/pvc-rbkbw not found
I0903 05:14:25.155657       1 pv_controller.go:1108] reclaimVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: policy is Delete
I0903 05:14:25.155674       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]]
I0903 05:14:25.155696       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe" with version 3484
I0903 05:14:25.155748       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: phase: Bound, bound to: "azuredisk-7578/pvc-6glfd (uid: a8adc108-94da-4baf-a01d-f2bc59bc6ffe)", boundByController: true
... skipping 39 lines ...
I0903 05:14:25.158921       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-n2qjm] status: phase Bound already set
I0903 05:14:25.159019       1 pv_controller.go:1038] volume "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7" bound to claim "azuredisk-7578/pvc-n2qjm"
I0903 05:14:25.159132       1 pv_controller.go:1039] volume "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-n2qjm (uid: ca045656-67ce-4948-96ea-e3da636ffcc7)", boundByController: true
I0903 05:14:25.159240       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-n2qjm" status after binding: phase: Bound, bound to: "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7", bindCompleted: true, boundByController: true
I0903 05:14:25.163186       1 pv_controller.go:1341] isVolumeReleased[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is released
I0903 05:14:25.163208       1 pv_controller.go:1405] doDeleteVolume [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]
I0903 05:14:25.186850       1 pv_controller.go:1260] deletion of volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:14:25.186898       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: set phase Failed
I0903 05:14:25.186922       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase Failed already set
E0903 05:14:25.187132       1 goroutinemap.go:150] Operation for "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" failed. No retries permitted until 2022-09-03 05:14:26.187004403 +0000 UTC m=+1289.093141907 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:14:25.316458       1 node_lifecycle_controller.go:1047] Node capz-m0dmyx-md-0-4f2wx ReadyCondition updated. Updating timestamp.
I0903 05:14:27.779673       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63.609µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:35720" resp=200
I0903 05:14:30.097845       1 gc_controller.go:161] GC'ing orphaned
I0903 05:14:30.097879       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:14:37.779237       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38042" resp=200
I0903 05:14:39.091450       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-ca045656-67ce-4948-96ea-e3da636ffcc7) returned with <nil>
... skipping 10 lines ...
I0903 05:14:40.157865       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: volume is bound to claim azuredisk-7578/pvc-n2qjm
I0903 05:14:40.157880       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: claim azuredisk-7578/pvc-n2qjm found: phase: Bound, bound to: "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7", bindCompleted: true, boundByController: true
I0903 05:14:40.157890       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: all is bound
I0903 05:14:40.157899       1 pv_controller.go:858] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: set phase Bound
I0903 05:14:40.157907       1 pv_controller.go:861] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: phase Bound already set
I0903 05:14:40.157922       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:14:40.157937       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:14:40.157959       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:14:40.157975       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-6glfd" with version 3486
I0903 05:14:40.157991       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-6glfd]: phase: Bound, bound to: "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe", bindCompleted: true, boundByController: true
I0903 05:14:40.158021       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-6glfd]: volume "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe" found: phase: Bound, bound to: "azuredisk-7578/pvc-6glfd (uid: a8adc108-94da-4baf-a01d-f2bc59bc6ffe)", boundByController: true
I0903 05:14:40.158029       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-6glfd]: claim is already correctly bound
I0903 05:14:40.158037       1 pv_controller.go:1012] binding volume "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe" to claim "azuredisk-7578/pvc-6glfd"
... skipping 34 lines ...
I0903 05:14:40.158458       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: all is bound
I0903 05:14:40.158464       1 pv_controller.go:858] updating PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: set phase Bound
I0903 05:14:40.158471       1 pv_controller.go:861] updating PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: phase Bound already set
I0903 05:14:40.158489       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a] started
I0903 05:14:40.167359       1 pv_controller.go:1341] isVolumeReleased[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is released
I0903 05:14:40.167376       1 pv_controller.go:1405] doDeleteVolume [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]
I0903 05:14:40.190006       1 pv_controller.go:1260] deletion of volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted
I0903 05:14:40.190026       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: set phase Failed
I0903 05:14:40.190035       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase Failed already set
E0903 05:14:40.190150       1 goroutinemap.go:150] Operation for "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" failed. No retries permitted until 2022-09-03 05:14:42.190042702 +0000 UTC m=+1305.096180206 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/virtualMachines/capz-m0dmyx-md-0-4f2wx), could not be deleted"
I0903 05:14:41.508192       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0903 05:14:47.780515       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="79.802µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56676" resp=200
I0903 05:14:50.004724       1 controller.go:272] Triggering nodeSync
I0903 05:14:50.004759       1 controller.go:291] nodeSync has been triggered
I0903 05:14:50.004769       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0903 05:14:50.004779       1 controller.go:790] Finished updateLoadBalancerHosts
... skipping 20 lines ...
I0903 05:14:55.158352       1 pv_controller.go:858] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: set phase Bound
I0903 05:14:55.158359       1 pv_controller.go:1012] binding volume "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7" to claim "azuredisk-7578/pvc-n2qjm"
I0903 05:14:55.158362       1 pv_controller.go:861] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: phase Bound already set
I0903 05:14:55.158368       1 pv_controller.go:910] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: binding to "azuredisk-7578/pvc-n2qjm"
I0903 05:14:55.158376       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:14:55.158390       1 pv_controller.go:922] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: already bound to "azuredisk-7578/pvc-n2qjm"
I0903 05:14:55.158395       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:14:55.158398       1 pv_controller.go:858] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: set phase Bound
I0903 05:14:55.158407       1 pv_controller.go:861] updating PersistentVolume[pvc-ca045656-67ce-4948-96ea-e3da636ffcc7]: phase Bound already set
I0903 05:14:55.158416       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:14:55.158415       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-7578/pvc-n2qjm]: binding to "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7"
I0903 05:14:55.158434       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: claim azuredisk-7578/pvc-rbkbw not found
I0903 05:14:55.158441       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-7578/pvc-n2qjm]: already bound to "pvc-ca045656-67ce-4948-96ea-e3da636ffcc7"
... skipping 27 lines ...
I0903 05:14:55.158749       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: claim azuredisk-7578/pvc-6glfd found: phase: Bound, bound to: "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe", bindCompleted: true, boundByController: true
I0903 05:14:55.158764       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: all is bound
I0903 05:14:55.158769       1 pv_controller.go:858] updating PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: set phase Bound
I0903 05:14:55.158776       1 pv_controller.go:861] updating PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: phase Bound already set
I0903 05:14:55.165437       1 pv_controller.go:1341] isVolumeReleased[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is released
I0903 05:14:55.165454       1 pv_controller.go:1405] doDeleteVolume [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]
I0903 05:14:55.165501       1 pv_controller.go:1260] deletion of volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) since it's in attaching or detaching state
I0903 05:14:55.165512       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: set phase Failed
I0903 05:14:55.165523       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase Failed already set
E0903 05:14:55.165565       1 goroutinemap.go:150] Operation for "delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]" failed. No retries permitted until 2022-09-03 05:14:59.165539543 +0000 UTC m=+1322.071677047 (durationBeforeRetry 4s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) since it's in attaching or detaching state"
I0903 05:14:56.959964       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 3 items received
I0903 05:14:57.778455       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="180.003µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37820" resp=200
I0903 05:15:05.142585       1 azure_controller_standard.go:184] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) returned with <nil>
I0903 05:15:05.142621       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a) succeeded
I0903 05:15:05.142631       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a was detached from node:capz-m0dmyx-md-0-4f2wx
I0903 05:15:05.142680       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a") on node "capz-m0dmyx-md-0-4f2wx" 
... skipping 2 lines ...
I0903 05:15:09.986097       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:15:10.070658       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0903 05:15:10.099708       1 gc_controller.go:161] GC'ing orphaned
I0903 05:15:10.099760       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0903 05:15:10.158542       1 pv_controller_base.go:528] resyncing PV controller
I0903 05:15:10.158695       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3572
I0903 05:15:10.158780       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:15:10.158844       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:15:10.158895       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: claim azuredisk-7578/pvc-rbkbw not found
I0903 05:15:10.158916       1 pv_controller.go:1108] reclaimVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: policy is Delete
I0903 05:15:10.158951       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]]
I0903 05:15:10.159002       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe" with version 3484
I0903 05:15:10.159052       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8adc108-94da-4baf-a01d-f2bc59bc6ffe]: phase: Bound, bound to: "azuredisk-7578/pvc-6glfd (uid: a8adc108-94da-4baf-a01d-f2bc59bc6ffe)", boundByController: true
... skipping 48 lines ...
I0903 05:15:15.381949       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a
I0903 05:15:15.381989       1 pv_controller.go:1436] volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" deleted
I0903 05:15:15.382001       1 pv_controller.go:1284] deleteVolumeOperation [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: success
I0903 05:15:15.389646       1 pv_protection_controller.go:205] Got event on PV pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a
I0903 05:15:15.389696       1 pv_protection_controller.go:125] Processing PV pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a
I0903 05:15:15.390112       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" with version 3667
I0903 05:15:15.390149       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: phase: Failed, bound to: "azuredisk-7578/pvc-rbkbw (uid: 0f4725bf-a369-4201-aaa3-431fdfd9b00a)", boundByController: true
I0903 05:15:15.390270       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: volume is bound to claim azuredisk-7578/pvc-rbkbw
I0903 05:15:15.390297       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: claim azuredisk-7578/pvc-rbkbw not found
I0903 05:15:15.390381       1 pv_controller.go:1108] reclaimVolume[pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a]: policy is Delete
I0903 05:15:15.390405       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a[cb71f12d-1d1a-4db6-8ac7-664477e9a482]]
I0903 05:15:15.390479       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a] started
I0903 05:15:15.394938       1 pv_controller.go:1244] Volume "pvc-0f4725bf-a369-4201-aaa3-431fdfd9b00a" is already being deleted
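
The repeated "operation ... is already running, skipping" and "Volume ... is already being deleted" lines show the deduplication the PV controller applies: scheduleOperation hands each delete to a per-key operation map, so at most one delete runs per volume and duplicate requests made while one is in flight are dropped. A toy sketch of that idea follows; the types are hypothetical and this is not the goroutinemap package itself.

package main

import (
	"fmt"
	"sync"
)

// operationMap runs at most one operation per key at a time.
type operationMap struct {
	mu      sync.Mutex
	running map[string]bool
}

func newOperationMap() *operationMap {
	return &operationMap{running: make(map[string]bool)}
}

// Run starts op for key unless one is already in flight, mirroring the
// "is already running, skipping" messages in the log.
func (m *operationMap) Run(key string, op func()) {
	m.mu.Lock()
	if m.running[key] {
		m.mu.Unlock()
		fmt.Printf("operation %q is already running, skipping\n", key)
		return
	}
	m.running[key] = true
	m.mu.Unlock()
	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, key)
			m.mu.Unlock()
		}()
		op()
	}()
}

func main() {
	var wg sync.WaitGroup
	m := newOperationMap()
	wg.Add(1)
	m.Run("delete-pvc-example", func() { defer wg.Done(); fmt.Println("deleting volume") })
	m.Run("delete-pvc-example", func() { fmt.Println("never runs") }) // skipped: already running
	wg.Wait()
}
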
... skipping 222 lines ...
I0903 05:15:44.579608       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7578, name pvc-6glfd.171141bcf1a497cc, uid c75530e2-59e7-4fba-a517-79c973552d03, event type delete
I0903 05:15:44.583297       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7578, name pvc-n2qjm.171141bc4cb08fd0, uid a3e6b611-3f2e-4be3-925a-d2eae5b40dac, event type delete
I0903 05:15:44.587299       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7578, name pvc-n2qjm.171141bce9f2c7bd, uid 47adcede-a318-4a13-b022-822b6388bb7e, event type delete
I0903 05:15:44.591780       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7578, name pvc-rbkbw.171141bc543ab7c5, uid 5406942d-b392-476f-a4a6-9f2cb1a875f0, event type delete
I0903 05:15:44.595019       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7578, name pvc-rbkbw.171141bceec7e564, uid b9b12143-7181-47da-9212-87e489e9865f, event type delete
I0903 05:15:44.631800       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7578, name default-token-nlbjd, uid bb1c2cd6-b577-48fb-add0-f1d1b7beb86f, event type delete
E0903 05:15:44.645037       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7578/default: secrets "default-token-wjqb5" is forbidden: unable to create new content in namespace azuredisk-7578 because it is being terminated
I0903 05:15:44.646607       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7578, name kube-root-ca.crt, uid 475e596a-09bc-4db4-8f61-2794c6a79661, event type delete
I0903 05:15:44.647863       1 publisher.go:181] Finished syncing namespace "azuredisk-7578" (1.189419ms)
I0903 05:15:44.671862       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7578/default), service account deleted, removing tokens
I0903 05:15:44.672023       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7578" (2.6µs)
I0903 05:15:44.671951       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7578, name default, uid b32439d7-c45e-4983-945a-5c1d8adc0ff8, event type delete
I0903 05:15:44.680340       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7578, estimate: 0, errors: <nil>
... skipping 566 lines ...
I0903 05:17:14.294714       1 stateful_set.go:410] Finished syncing statefulset "azuredisk-7886/azuredisk-volume-tester-cwrnm" (1.364519ms)
I0903 05:17:14.324961       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74" to node "capz-m0dmyx-md-0-4f2wx".
I0903 05:17:14.371872       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74" lun 0 to node "capz-m0dmyx-md-0-4f2wx".
I0903 05:17:14.371910       1 azure_controller_standard.go:93] azureDisk - update(capz-m0dmyx): vm(capz-m0dmyx-md-0-4f2wx) - attach disk(capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74) with DiskEncryptionSetID()
I0903 05:17:15.304087       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8666
I0903 05:17:15.393060       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8666, name default-token-2bb4w, uid a5ece3f6-5e0b-40fb-b442-685a98c6e00c, event type delete
E0903 05:17:15.405257       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8666/default: secrets "default-token-8pqmr" is forbidden: unable to create new content in namespace azuredisk-8666 because it is being terminated
I0903 05:17:15.411129       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141db2de9291f, uid a71eefc1-fc99-4eab-82df-efac91db62df, event type delete
I0903 05:17:15.415253       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141dc80ec0d33, uid a95b073a-4c56-4f39-907b-dfbccdd98b57, event type delete
I0903 05:17:15.419897       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141dda4b7401f, uid 340b8711-416b-4409-b317-3e961d102a7c, event type delete
I0903 05:17:15.423590       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141dda9d47130, uid 8930ebdd-7c7e-4a8f-a822-8dea1c25c044, event type delete
I0903 05:17:15.427132       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141ddaf87edba, uid 4fd80325-0282-426f-ad34-fa59202addd7, event type delete
I0903 05:17:15.432490       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-dbksk.171141de2c480231, uid 3ac7093d-f8d5-4981-9dbe-64ebe05cbb18, event type delete
... skipping 236 lines ...
I0903 05:18:07.301797       1 stateful_set_control.go:443] StatefulSet azuredisk-7886/azuredisk-volume-tester-cwrnm is waiting for Pod azuredisk-volume-tester-cwrnm-0 to be Running and Ready
I0903 05:18:07.306121       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-7886/azuredisk-volume-tester-cwrnm-0"
I0903 05:18:07.306423       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-cwrnm-0"
I0903 05:18:07.306823       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-cwrnm-0, PodDisruptionBudget controller will avoid syncing.
I0903 05:18:07.306882       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-cwrnm-0"
I0903 05:18:07.306445       1 stateful_set.go:222] Pod azuredisk-volume-tester-cwrnm-0 updated, objectMeta {Name:azuredisk-volume-tester-cwrnm-0 GenerateName:azuredisk-volume-tester-cwrnm- Namespace:azuredisk-7886 SelfLink: UID:7799aca1-5b72-402d-a106-29bf74ee0304 ResourceVersion:4091 Generation:0 CreationTimestamp:2022-09-03 05:18:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-cwrnm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-cwrnm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-cwrnm UID:db7c90bd-c93f-4c31-bf2c-acd8ef240b7a Controller:0xc002704977 BlockOwnerDeletion:0xc002704978}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 05:18:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db7c90bd-c93f-4c31-bf2c-acd8ef240b7a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-cwrnm-0 GenerateName:azuredisk-volume-tester-cwrnm- Namespace:azuredisk-7886 SelfLink: UID:7799aca1-5b72-402d-a106-29bf74ee0304 ResourceVersion:4093 Generation:0 CreationTimestamp:2022-09-03 05:18:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-cwrnm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-cwrnm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-cwrnm UID:db7c90bd-c93f-4c31-bf2c-acd8ef240b7a Controller:0xc0029c4557 BlockOwnerDeletion:0xc0029c4558}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 05:18:07 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db7c90bd-c93f-4c31-bf2c-acd8ef240b7a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]}.
W0903 05:18:07.313397       1 reconciler.go:344] Multi-Attach error for volume "pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m0dmyx/providers/Microsoft.Compute/disks/capz-m0dmyx-dynamic-pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74") from node "capz-m0dmyx-md-0-pj7kz" Volume is already exclusively attached to node capz-m0dmyx-md-0-4f2wx and can't be attached to another
I0903 05:18:07.313824       1 event.go:291] "Event occurred" object="azuredisk-7886/azuredisk-volume-tester-cwrnm-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-1c09fddf-669a-4491-b1ac-50474d0bbd74\" Volume is already exclusively attached to one node and can't be attached to another"
I0903 05:18:07.323599       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-cwrnm-0"
I0903 05:18:07.323802       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-cwrnm-0, PodDisruptionBudget controller will avoid syncing.
I0903 05:18:07.323943       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-cwrnm-0"
I0903 05:18:07.323790       1 stateful_set.go:222] Pod azuredisk-volume-tester-cwrnm-0 updated, objectMeta {Name:azuredisk-volume-tester-cwrnm-0 GenerateName:azuredisk-volume-tester-cwrnm- Namespace:azuredisk-7886 SelfLink: UID:7799aca1-5b72-402d-a106-29bf74ee0304 ResourceVersion:4093 Generation:0 CreationTimestamp:2022-09-03 05:18:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-cwrnm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-cwrnm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-cwrnm UID:db7c90bd-c93f-4c31-bf2c-acd8ef240b7a Controller:0xc0029c4557 BlockOwnerDeletion:0xc0029c4558}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 05:18:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db7c90bd-c93f-4c31-bf2c-acd8ef240b7a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-cwrnm-0 GenerateName:azuredisk-volume-tester-cwrnm- Namespace:azuredisk-7886 SelfLink: UID:7799aca1-5b72-402d-a106-29bf74ee0304 ResourceVersion:4097 Generation:0 CreationTimestamp:2022-09-03 05:18:07 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5956761487052003905 controller-revision-hash:azuredisk-volume-tester-cwrnm-5ccc4f5bfd statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-cwrnm-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-cwrnm UID:db7c90bd-c93f-4c31-bf2c-acd8ef240b7a Controller:0xc0029c4a17 BlockOwnerDeletion:0xc0029c4a18}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-03 05:18:07 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"db7c90bd-c93f-4c31-bf2c-acd8ef240b7a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-03 05:18:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]}.
I0903 05:18:07.325692       1 stateful_set_control.go:113] StatefulSet azuredisk-7886/azuredisk-volume-tester-cwrnm pod status replicas=1 ready=0 current=1 updated=1
I0903 05:18:07.325713       1 stateful_set_control.go:121] StatefulSet azuredisk-7886/azuredisk-volume-tester-cwrnm revisions current=azuredisk-volume-tester-cwrnm-5ccc4f5bfd update=azuredisk-volume-tester-cwrnm-5ccc4f5bfd
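The Multi-Attach warning above is the attach-detach controller refusing to attach the Azure managed disk to capz-m0dmyx-md-0-pj7kz while it is still attached to capz-m0dmyx-md-0-4f2wx; the StatefulSet pod on the new node cannot mount the volume until the existing attachment is detached. A minimal sketch of how one could confirm which node currently holds the disk (assumes the workload-cluster kubeconfig written earlier in this run is still at ./kubeconfig; the output columns are illustrative):
# List VolumeAttachment objects to see which node each PV is attached to.
kubectl --kubeconfig=./kubeconfig get volumeattachments \
  -o custom-columns=NAME:.metadata.name,PV:.spec.source.persistentVolumeName,NODE:.spec.nodeName,ATTACHED:.status.attached
# The FailedAttachVolume events also appear on the pod itself.
kubectl --kubeconfig=./kubeconfig -n azuredisk-7886 describe pod azuredisk-volume-tester-cwrnm-0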
... skipping 248 lines ...
I0903 05:19:18.565490       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8470" (3.7µs)
2022/09/03 05:19:21 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1337.316 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
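The deprecation notice above comes from the Ginkgo 1.x runner used by this suite. A minimal, hedged sketch of the v2 migration the notice points to (module version and commands are illustrative, not taken from this job):
# Pull in the v2 module; test sources then import "github.com/onsi/ginkgo/v2"
# instead of "github.com/onsi/ginkgo" (see MIGRATING_TO_V2.md for the API changes).
go get github.com/onsi/ginkgo/v2
go mod tidy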
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-rk5jp, container manager
STEP: Dumping workload cluster default/capz-m0dmyx logs
Sep  3 05:20:50.882: INFO: Collecting logs for Linux node capz-m0dmyx-control-plane-csq9p in cluster capz-m0dmyx in namespace default

Sep  3 05:21:50.883: INFO: Collecting boot logs for AzureMachine capz-m0dmyx-control-plane-csq9p

Failed to get logs for machine capz-m0dmyx-control-plane-ksrtf, cluster default/capz-m0dmyx: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 05:21:52.178: INFO: Collecting logs for Linux node capz-m0dmyx-md-0-4f2wx in cluster capz-m0dmyx in namespace default

Sep  3 05:22:52.179: INFO: Collecting boot logs for AzureMachine capz-m0dmyx-md-0-4f2wx

Failed to get logs for machine capz-m0dmyx-md-0-54f7c54db7-gqvkj, cluster default/capz-m0dmyx: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  3 05:22:52.774: INFO: Collecting logs for Linux node capz-m0dmyx-md-0-pj7kz in cluster capz-m0dmyx in namespace default

Sep  3 05:23:52.777: INFO: Collecting boot logs for AzureMachine capz-m0dmyx-md-0-pj7kz

Failed to get logs for machine capz-m0dmyx-md-0-54f7c54db7-xg5vb, cluster default/capz-m0dmyx: open /etc/azure-ssh/azure-ssh: no such file or directory
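The three "Failed to get logs" lines above mean the log-collection step could not open the SSH key it expects at /etc/azure-ssh/azure-ssh, so per-node boot logs were skipped for those machines. A minimal pre-flight check one could add to such a collection step (purely illustrative, not part of this job's scripts):
# Warn instead of silently skipping when the SSH key used for node log collection is missing.
if [ ! -s /etc/azure-ssh/azure-ssh ]; then
  echo "WARNING: /etc/azure-ssh/azure-ssh not found; node boot-log collection will be skipped" >&2
fi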
STEP: Dumping workload cluster default/capz-m0dmyx kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-cmj7w, container calico-kube-controllers
STEP: Fetching kube-system pod logs took 696.82779ms
STEP: Dumping workload cluster default/capz-m0dmyx Azure activity log
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-lspn8
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-m0dmyx-control-plane-csq9p, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-m0dmyx-control-plane-csq9p
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-m0dmyx-control-plane-csq9p, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-capz-m0dmyx-control-plane-csq9p"
STEP: Collecting events for Pod kube-system/etcd-capz-m0dmyx-control-plane-csq9p
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-m0dmyx-control-plane-csq9p
STEP: Creating log watcher for controller kube-system/kube-proxy-875wp, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-875wp
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-cmj7w
STEP: Creating log watcher for controller kube-system/kube-proxy-brl82, container kube-proxy
... skipping 11 lines ...
STEP: Collecting events for Pod kube-system/calico-node-c9gqb
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-xwkkj, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-xwkkj
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-nf8m4, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-m0dmyx-control-plane-csq9p, container etcd
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-nf8m4
STEP: failed to find events of Pod "etcd-capz-m0dmyx-control-plane-csq9p"
STEP: failed to find events of Pod "kube-scheduler-capz-m0dmyx-control-plane-csq9p"
STEP: failed to find events of Pod "kube-controller-manager-capz-m0dmyx-control-plane-csq9p"
STEP: Fetching activity logs took 5.991175918s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-m0dmyx" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...