Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-04 04:39
Elapsed: 50m59s
Revision
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 631 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 137 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-fvszkt-kubeconfig; do sleep 1; done"
capz-fvszkt-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-fvszkt-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-fvszkt-control-plane-wnbrq   NotReady   control-plane,master   7s    v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
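For reference, the kubeconfig retrieval and node-readiness checks above can be expressed without the grep/sleep loops; a minimal sketch (the timeout value is assumed, the secret name comes from this run):

# Extract the workload cluster kubeconfig from the CAPI-generated secret.
kubectl get secret capz-fvszkt-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
# Block until the control-plane node reports Ready instead of polling with grep.
kubectl --kubeconfig=./kubeconfig wait node \
  --selector=node-role.kubernetes.io/control-plane \
  --for=condition=Ready --timeout=600s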
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-fvszkt-control-plane-wnbrq condition met
node/capz-fvszkt-md-0-jvr4s condition met
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 04:57:11.068: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7cq98" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  4 04:57:11.172: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 103.894887ms
Sep  4 04:57:13.277: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2094149s
Sep  4 04:57:15.382: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314378478s
Sep  4 04:57:17.487: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418581341s
Sep  4 04:57:19.592: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523885237s
Sep  4 04:57:21.697: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6289117s
... skipping 2 lines ...
Sep  4 04:57:28.014: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 16.946024097s
Sep  4 04:57:30.118: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 19.049843599s
Sep  4 04:57:32.223: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 21.154810508s
Sep  4 04:57:34.334: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Pending", Reason="", readiness=false. Elapsed: 23.266151003s
Sep  4 04:57:36.446: INFO: Pod "azuredisk-volume-tester-7cq98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.378155101s
STEP: Saw pod success
Sep  4 04:57:36.446: INFO: Pod "azuredisk-volume-tester-7cq98" satisfied condition "Succeeded or Failed"
Sep  4 04:57:36.446: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-7cq98"
Sep  4 04:57:36.569: INFO: Pod azuredisk-volume-tester-7cq98 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-7cq98 in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 04:57:36.994: INFO: deleting PVC "azuredisk-8081"/"pvc-kh7dr"
Sep  4 04:57:36.994: INFO: Deleting PersistentVolumeClaim "pvc-kh7dr"
STEP: waiting for claim's PV "pvc-643af74e-feff-4184-a1b2-4748a80ef477" to be deleted
Sep  4 04:57:37.101: INFO: Waiting up to 10m0s for PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 to get deleted
Sep  4 04:57:37.220: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Released (119.515414ms)
Sep  4 04:57:42.328: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (5.226902881s)
Sep  4 04:57:47.436: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (10.334733858s)
Sep  4 04:57:52.541: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (15.440410875s)
Sep  4 04:57:57.650: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (20.549207606s)
Sep  4 04:58:02.755: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (25.654333383s)
Sep  4 04:58:07.860: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (30.758901264s)
Sep  4 04:58:12.964: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 found and phase=Failed (35.863233072s)
Sep  4 04:58:18.068: INFO: PersistentVolume pvc-643af74e-feff-4184-a1b2-4748a80ef477 was removed
Sep  4 04:58:18.068: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  4 04:58:18.172: INFO: Claim "azuredisk-8081" in namespace "pvc-kh7dr" doesn't exist in the system
Sep  4 04:58:18.172: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-mrcv7
Sep  4 04:58:18.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  4 04:58:37.428: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-722zk"
Sep  4 04:58:37.535: INFO: Error getting logs for pod azuredisk-volume-tester-722zk: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-722zk)
STEP: Deleting pod azuredisk-volume-tester-722zk in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 04:58:37.918: INFO: deleting PVC "azuredisk-5466"/"pvc-njkf6"
Sep  4 04:58:37.918: INFO: Deleting PersistentVolumeClaim "pvc-njkf6"
STEP: waiting for claim's PV "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" to be deleted
Sep  4 04:58:38.024: INFO: Waiting up to 10m0s for PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 to get deleted
Sep  4 04:58:38.201: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Bound (176.754312ms)
Sep  4 04:58:43.306: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (5.281911862s)
Sep  4 04:58:48.414: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (10.389315464s)
Sep  4 04:58:53.571: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (15.547002541s)
Sep  4 04:58:58.680: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (20.655088658s)
Sep  4 04:59:03.783: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (25.758950508s)
Sep  4 04:59:08.892: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (30.867625479s)
Sep  4 04:59:13.998: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 found and phase=Failed (35.973707322s)
Sep  4 04:59:19.106: INFO: PersistentVolume pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 was removed
Sep  4 04:59:19.106: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  4 04:59:19.209: INFO: Claim "azuredisk-5466" in namespace "pvc-njkf6" doesn't exist in the system
Sep  4 04:59:19.210: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-nqcnk
Sep  4 04:59:19.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 04:59:21.192: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-gdlkx" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  4 04:59:21.295: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 103.297752ms
Sep  4 04:59:23.401: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208420821s
Sep  4 04:59:25.506: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.313477648s
Sep  4 04:59:27.611: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41851677s
Sep  4 04:59:29.715: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.523298646s
Sep  4 04:59:31.820: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.627654881s
Sep  4 04:59:33.924: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.731941349s
Sep  4 04:59:36.034: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.841985077s
Sep  4 04:59:38.144: INFO: Pod "azuredisk-volume-tester-gdlkx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.951834346s
STEP: Saw pod success
Sep  4 04:59:38.144: INFO: Pod "azuredisk-volume-tester-gdlkx" satisfied condition "Succeeded or Failed"
Sep  4 04:59:38.144: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-gdlkx"
Sep  4 04:59:38.292: INFO: Pod azuredisk-volume-tester-gdlkx has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-gdlkx in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 04:59:38.703: INFO: deleting PVC "azuredisk-2790"/"pvc-rtf9g"
Sep  4 04:59:38.703: INFO: Deleting PersistentVolumeClaim "pvc-rtf9g"
STEP: waiting for claim's PV "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" to be deleted
Sep  4 04:59:38.809: INFO: Waiting up to 10m0s for PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 to get deleted
Sep  4 04:59:38.968: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Bound (158.832454ms)
Sep  4 04:59:44.074: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (5.264987468s)
Sep  4 04:59:49.183: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (10.374316919s)
Sep  4 04:59:54.290: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (15.481699584s)
Sep  4 04:59:59.401: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (20.59232184s)
Sep  4 05:00:04.505: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (25.696291563s)
Sep  4 05:00:09.617: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 found and phase=Failed (30.80784231s)
Sep  4 05:00:14.720: INFO: PersistentVolume pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 was removed
Sep  4 05:00:14.720: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  4 05:00:14.824: INFO: Claim "azuredisk-2790" in namespace "pvc-rtf9g" doesn't exist in the system
Sep  4 05:00:14.824: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-7lpjv
Sep  4 05:00:14.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  4 05:00:16.934: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-59x7b" in namespace "azuredisk-5356" to be "Error status code"
Sep  4 05:00:17.051: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 117.009687ms
Sep  4 05:00:19.156: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222126381s
Sep  4 05:00:21.261: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.327472615s
Sep  4 05:00:23.368: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433596434s
Sep  4 05:00:25.471: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53714115s
Sep  4 05:00:27.576: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641576945s
Sep  4 05:00:29.680: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.745619773s
Sep  4 05:00:31.790: INFO: Pod "azuredisk-volume-tester-59x7b": Phase="Failed", Reason="", readiness=false. Elapsed: 14.855674401s
STEP: Saw pod failure
Sep  4 05:00:31.790: INFO: Pod "azuredisk-volume-tester-59x7b" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  4 05:00:31.987: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-59x7b"
Sep  4 05:00:32.129: INFO: Pod azuredisk-volume-tester-59x7b has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-59x7b in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 05:00:32.519: INFO: deleting PVC "azuredisk-5356"/"pvc-gkbpt"
Sep  4 05:00:32.519: INFO: Deleting PersistentVolumeClaim "pvc-gkbpt"
STEP: waiting for claim's PV "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" to be deleted
Sep  4 05:00:32.627: INFO: Waiting up to 10m0s for PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 to get deleted
Sep  4 05:00:32.731: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (103.459669ms)
Sep  4 05:00:37.839: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (5.211932618s)
Sep  4 05:00:42.946: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (10.318624063s)
Sep  4 05:00:48.052: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (15.424913421s)
Sep  4 05:00:53.160: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (20.532606798s)
Sep  4 05:00:58.267: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (25.639756323s)
Sep  4 05:01:03.376: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (30.748909976s)
Sep  4 05:01:08.483: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (35.855939353s)
Sep  4 05:01:13.588: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 found and phase=Failed (40.960658966s)
Sep  4 05:01:18.692: INFO: PersistentVolume pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 was removed
Sep  4 05:01:18.692: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  4 05:01:18.795: INFO: Claim "azuredisk-5356" in namespace "pvc-gkbpt" doesn't exist in the system
Sep  4 05:01:18.795: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-w9ffm
Sep  4 05:01:18.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 54 lines ...
Sep  4 05:02:41.193: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (10.316470264s)
Sep  4 05:02:46.298: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (15.421317091s)
Sep  4 05:02:51.404: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (20.527470955s)
Sep  4 05:02:56.509: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (25.632093259s)
Sep  4 05:03:01.616: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (30.739922094s)
Sep  4 05:03:06.720: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Bound (35.843681683s)
Sep  4 05:03:11.825: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Failed (40.948410383s)
Sep  4 05:03:16.931: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Failed (46.053934481s)
Sep  4 05:03:22.035: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Failed (51.158210474s)
Sep  4 05:03:27.142: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 found and phase=Failed (56.265109199s)
Sep  4 05:03:32.246: INFO: PersistentVolume pvc-294c705d-eae2-4f0f-855a-f9126bb56de3 was removed
Sep  4 05:03:32.246: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 05:03:32.349: INFO: Claim "azuredisk-5194" in namespace "pvc-sdm8p" doesn't exist in the system
Sep  4 05:03:32.349: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-bzr7l
Sep  4 05:03:32.455: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-2gpk2"
Sep  4 05:03:32.572: INFO: Pod azuredisk-volume-tester-2gpk2 has the following logs: 
... skipping 9 lines ...
Sep  4 05:03:43.340: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (10.344121935s)
Sep  4 05:03:48.447: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (15.451589718s)
Sep  4 05:03:53.554: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (20.55807494s)
Sep  4 05:03:58.721: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (25.725412491s)
Sep  4 05:04:03.829: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (30.833328943s)
Sep  4 05:04:08.933: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Bound (35.937661579s)
Sep  4 05:04:14.040: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (41.044428122s)
Sep  4 05:04:19.148: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (46.15242896s)
Sep  4 05:04:24.256: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (51.260324545s)
Sep  4 05:04:29.363: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (56.367108409s)
Sep  4 05:04:34.466: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (1m1.470277781s)
Sep  4 05:04:39.572: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 found and phase=Failed (1m6.576816211s)
Sep  4 05:04:44.675: INFO: PersistentVolume pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 was removed
Sep  4 05:04:44.675: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 05:04:44.779: INFO: Claim "azuredisk-5194" in namespace "pvc-9mkgb" doesn't exist in the system
Sep  4 05:04:44.779: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-7mvcz
Sep  4 05:04:44.931: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-562bv"
Sep  4 05:04:45.050: INFO: Pod azuredisk-volume-tester-562bv has the following logs: 
... skipping 8 lines ...
Sep  4 05:04:50.676: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (5.206386836s)
Sep  4 05:04:55.784: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (10.31425544s)
Sep  4 05:05:00.890: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (15.420172953s)
Sep  4 05:05:05.994: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (20.523995505s)
Sep  4 05:05:11.102: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (25.631815836s)
Sep  4 05:05:16.277: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Bound (30.807414324s)
Sep  4 05:05:21.381: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Failed (35.910812513s)
Sep  4 05:05:26.486: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Failed (41.015685483s)
Sep  4 05:05:31.594: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Failed (46.123764863s)
Sep  4 05:05:36.701: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Failed (51.2311254s)
Sep  4 05:05:41.809: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 found and phase=Failed (56.338595704s)
Sep  4 05:05:46.916: INFO: PersistentVolume pvc-de334b68-9168-40d9-b091-5931e0467ee8 was removed
Sep  4 05:05:46.916: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 05:05:47.019: INFO: Claim "azuredisk-5194" in namespace "pvc-ghfr8" doesn't exist in the system
Sep  4 05:05:47.019: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-f9dhs
Sep  4 05:05:47.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 63 lines ...
Sep  4 05:08:41.842: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (10.314336391s)
Sep  4 05:08:46.948: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (15.421072989s)
Sep  4 05:08:52.053: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (20.526117983s)
Sep  4 05:08:57.233: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (25.705951995s)
Sep  4 05:09:02.337: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (30.810291287s)
Sep  4 05:09:07.450: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Bound (35.922473521s)
Sep  4 05:09:12.554: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (41.026422278s)
Sep  4 05:09:17.657: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (46.129932418s)
Sep  4 05:09:22.765: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (51.237996274s)
Sep  4 05:09:27.872: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (56.345140104s)
Sep  4 05:09:32.982: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (1m1.454710335s)
Sep  4 05:09:38.089: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (1m6.562247478s)
Sep  4 05:09:43.196: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be found and phase=Failed (1m11.668824699s)
Sep  4 05:09:48.301: INFO: PersistentVolume pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be was removed
Sep  4 05:09:48.301: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  4 05:09:48.404: INFO: Claim "azuredisk-1353" in namespace "pvc-sqcdb" doesn't exist in the system
Sep  4 05:09:48.404: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-7z8s8
Sep  4 05:09:48.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 05:10:12.251: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-grrwp" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  4 05:10:12.354: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 103.784868ms
Sep  4 05:10:14.458: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207821853s
Sep  4 05:10:16.570: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319317414s
Sep  4 05:10:18.692: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441165418s
Sep  4 05:10:20.801: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.550125209s
Sep  4 05:10:22.910: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.6593848s
... skipping 8 lines ...
Sep  4 05:10:41.907: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 29.656639828s
Sep  4 05:10:44.017: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 31.766382813s
Sep  4 05:10:46.133: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 33.882276535s
Sep  4 05:10:48.244: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Pending", Reason="", readiness=false. Elapsed: 35.993579508s
Sep  4 05:10:50.355: INFO: Pod "azuredisk-volume-tester-grrwp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.104062852s
STEP: Saw pod success
Sep  4 05:10:50.355: INFO: Pod "azuredisk-volume-tester-grrwp" satisfied condition "Succeeded or Failed"
Sep  4 05:10:50.355: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-grrwp"
Sep  4 05:10:50.474: INFO: Pod azuredisk-volume-tester-grrwp has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-grrwp in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 05:10:50.946: INFO: deleting PVC "azuredisk-59"/"pvc-g256d"
Sep  4 05:10:50.946: INFO: Deleting PersistentVolumeClaim "pvc-g256d"
STEP: waiting for claim's PV "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" to be deleted
Sep  4 05:10:51.052: INFO: Waiting up to 10m0s for PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 to get deleted
Sep  4 05:10:51.227: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (175.00959ms)
Sep  4 05:10:56.335: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (5.283067017s)
Sep  4 05:11:01.443: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (10.390603693s)
Sep  4 05:11:06.546: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (15.494220576s)
Sep  4 05:11:11.651: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (20.598913928s)
Sep  4 05:11:16.760: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (25.707278382s)
Sep  4 05:11:21.867: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (30.814357625s)
Sep  4 05:11:26.974: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 found and phase=Failed (35.92222944s)
Sep  4 05:11:32.079: INFO: PersistentVolume pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 was removed
Sep  4 05:11:32.079: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 05:11:32.183: INFO: Claim "azuredisk-59" in namespace "pvc-g256d" doesn't exist in the system
Sep  4 05:11:32.183: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-r7qh9
STEP: validating provisioned PV
STEP: checking the PV
... skipping 11 lines ...
STEP: checking the PV
Sep  4 05:11:43.441: INFO: deleting PVC "azuredisk-59"/"pvc-fbhmk"
Sep  4 05:11:43.441: INFO: Deleting PersistentVolumeClaim "pvc-fbhmk"
STEP: waiting for claim's PV "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" to be deleted
Sep  4 05:11:43.547: INFO: Waiting up to 10m0s for PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e to get deleted
Sep  4 05:11:43.806: INFO: PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e found and phase=Bound (259.57719ms)
Sep  4 05:11:48.914: INFO: PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e found and phase=Failed (5.367393264s)
Sep  4 05:11:54.081: INFO: PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e found and phase=Failed (10.534574298s)
Sep  4 05:11:59.188: INFO: PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e found and phase=Failed (15.64158411s)
Sep  4 05:12:04.293: INFO: PersistentVolume pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e was removed
Sep  4 05:12:04.293: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 05:12:04.397: INFO: Claim "azuredisk-59" in namespace "pvc-fbhmk" doesn't exist in the system
Sep  4 05:12:04.397: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-9mmh6
Sep  4 05:12:04.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-59" for this suite.
... skipping 27 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 05:12:06.461: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vx2sj" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  4 05:12:06.648: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 186.838356ms
Sep  4 05:12:08.752: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.291000104s
Sep  4 05:12:10.862: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.400858014s
Sep  4 05:12:12.972: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.511230516s
Sep  4 05:12:15.083: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622468453s
Sep  4 05:12:17.195: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.733851101s
... skipping 8 lines ...
Sep  4 05:12:36.194: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 29.733551631s
Sep  4 05:12:38.305: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 31.844208339s
Sep  4 05:12:40.415: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 33.954505153s
Sep  4 05:12:42.527: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Pending", Reason="", readiness=false. Elapsed: 36.065593743s
Sep  4 05:12:44.637: INFO: Pod "azuredisk-volume-tester-vx2sj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.175786269s
STEP: Saw pod success
Sep  4 05:12:44.637: INFO: Pod "azuredisk-volume-tester-vx2sj" satisfied condition "Succeeded or Failed"
Sep  4 05:12:44.637: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-vx2sj"
Sep  4 05:12:44.752: INFO: Pod azuredisk-volume-tester-vx2sj has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.058147 seconds, 1.7GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  4 05:12:45.102: INFO: deleting PVC "azuredisk-2546"/"pvc-d47g5"
Sep  4 05:12:45.102: INFO: Deleting PersistentVolumeClaim "pvc-d47g5"
STEP: waiting for claim's PV "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" to be deleted
Sep  4 05:12:45.209: INFO: Waiting up to 10m0s for PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 to get deleted
Sep  4 05:12:45.435: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Released (226.118196ms)
Sep  4 05:12:50.539: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Failed (5.329445869s)
Sep  4 05:12:55.642: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Failed (10.433260038s)
Sep  4 05:13:00.750: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Failed (15.540482378s)
Sep  4 05:13:05.854: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Failed (20.644381745s)
Sep  4 05:13:10.957: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 found and phase=Failed (25.748338107s)
Sep  4 05:13:16.061: INFO: PersistentVolume pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 was removed
Sep  4 05:13:16.061: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  4 05:13:16.163: INFO: Claim "azuredisk-2546" in namespace "pvc-d47g5" doesn't exist in the system
Sep  4 05:13:16.164: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-6m479
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 05:13:16.584: INFO: deleting PVC "azuredisk-2546"/"pvc-gvd2w"
Sep  4 05:13:16.584: INFO: Deleting PersistentVolumeClaim "pvc-gvd2w"
STEP: waiting for claim's PV "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" to be deleted
Sep  4 05:13:16.689: INFO: Waiting up to 10m0s for PersistentVolume pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 to get deleted
Sep  4 05:13:16.796: INFO: PersistentVolume pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 found and phase=Failed (106.775926ms)
Sep  4 05:13:21.900: INFO: PersistentVolume pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 found and phase=Failed (5.21121291s)
Sep  4 05:13:27.008: INFO: PersistentVolume pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 found and phase=Failed (10.318920323s)
Sep  4 05:13:32.113: INFO: PersistentVolume pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 was removed
Sep  4 05:13:32.113: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  4 05:13:32.215: INFO: Claim "azuredisk-2546" in namespace "pvc-gvd2w" doesn't exist in the system
Sep  4 05:13:32.215: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-tc8mk
Sep  4 05:13:32.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2546" for this suite.
... skipping 85 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 05:13:37.504: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-587c4" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  4 05:13:37.614: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 109.977349ms
Sep  4 05:13:39.718: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214294914s
Sep  4 05:13:41.835: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331245191s
Sep  4 05:13:43.947: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443580304s
Sep  4 05:13:46.057: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553037663s
Sep  4 05:13:48.166: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.66260666s
... skipping 9 lines ...
Sep  4 05:14:09.278: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 31.774113091s
Sep  4 05:14:11.388: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 33.884400115s
Sep  4 05:14:13.498: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Pending", Reason="", readiness=false. Elapsed: 35.993980669s
Sep  4 05:14:15.608: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Running", Reason="", readiness=true. Elapsed: 38.103924826s
Sep  4 05:14:17.736: INFO: Pod "azuredisk-volume-tester-587c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.232670341s
STEP: Saw pod success
Sep  4 05:14:17.736: INFO: Pod "azuredisk-volume-tester-587c4" satisfied condition "Succeeded or Failed"
Sep  4 05:14:17.736: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-587c4"
Sep  4 05:14:17.952: INFO: Pod azuredisk-volume-tester-587c4 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-587c4 in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 05:14:18.436: INFO: deleting PVC "azuredisk-8582"/"pvc-lcfbw"
Sep  4 05:14:18.436: INFO: Deleting PersistentVolumeClaim "pvc-lcfbw"
STEP: waiting for claim's PV "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" to be deleted
Sep  4 05:14:18.543: INFO: Waiting up to 10m0s for PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd to get deleted
Sep  4 05:14:18.732: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Released (188.652621ms)
Sep  4 05:14:23.836: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (5.292593476s)
Sep  4 05:14:28.943: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (10.399991759s)
Sep  4 05:14:34.051: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (15.508497685s)
Sep  4 05:14:39.159: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (20.616102177s)
Sep  4 05:14:44.267: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (25.723733534s)
Sep  4 05:14:49.375: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (30.832308654s)
Sep  4 05:14:54.480: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (35.937453059s)
Sep  4 05:14:59.586: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (41.042700981s)
Sep  4 05:15:04.694: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (46.150730947s)
Sep  4 05:15:09.801: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd found and phase=Failed (51.258338267s)
Sep  4 05:15:14.905: INFO: PersistentVolume pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd was removed
Sep  4 05:15:14.905: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  4 05:15:15.009: INFO: Claim "azuredisk-8582" in namespace "pvc-lcfbw" doesn't exist in the system
Sep  4 05:15:15.009: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-h8n9r
STEP: validating provisioned PV
STEP: checking the PV
... skipping 407 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  4 05:19:25.866: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.982 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 248 lines ...
I0904 04:51:41.095504       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-04 04:44:43 +0000 UTC to 2032-09-01 04:49:43 +0000 UTC (now=2022-09-04 04:51:41.095486747 +0000 UTC))
I0904 04:51:41.095970       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662267099" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662267099" (2022-09-04 03:51:38 +0000 UTC to 2023-09-04 03:51:38 +0000 UTC (now=2022-09-04 04:51:41.095952254 +0000 UTC))
I0904 04:51:41.096364       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662267101" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662267099" (2022-09-04 03:51:39 +0000 UTC to 2023-09-04 03:51:39 +0000 UTC (now=2022-09-04 04:51:41.096346361 +0000 UTC))
I0904 04:51:41.096483       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0904 04:51:41.096574       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0904 04:51:41.097309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0904 04:51:44.117102       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 04:51:44.117152       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0904 04:51:48.468668       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0904 04:51:48.470149       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-fvszkt-control-plane-wnbrq_c3cd9fe4-b3e8-4fde-8fcc-20fd27651ec7 became leader"
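The controller-manager leader election seen here can be confirmed from the Lease object itself; a read-only sketch:

# Show which kube-controller-manager replica currently holds the lease.
kubectl --kubeconfig=./kubeconfig -n kube-system get lease kube-controller-manager \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'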
I0904 04:51:48.727814       1 request.go:600] Waited for 85.282021ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1?timeout=32s
I0904 04:51:48.778807       1 request.go:600] Waited for 136.285372ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0904 04:51:48.828492       1 request.go:600] Waited for 185.943899ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1?timeout=32s
I0904 04:51:48.878190       1 request.go:600] Waited for 235.622328ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1beta1?timeout=32s
... skipping 39 lines ...
I0904 04:51:49.186131       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0904 04:51:49.187168       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0904 04:51:49.187843       1 reflector.go:219] Starting reflector *v1.Secret (17h33m12.900433684s) from k8s.io/client-go/informers/factory.go:134
I0904 04:51:49.187875       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0904 04:51:49.190922       1 reflector.go:219] Starting reflector *v1.ServiceAccount (17h33m12.900433684s) from k8s.io/client-go/informers/factory.go:134
I0904 04:51:49.190951       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0904 04:51:49.231842       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 04:51:49.232086       1 controllermanager.go:559] Starting "deployment"
I0904 04:51:49.240031       1 controllermanager.go:574] Started "deployment"
I0904 04:51:49.240055       1 controllermanager.go:559] Starting "disruption"
I0904 04:51:49.240247       1 deployment_controller.go:153] "Starting controller" controller="deployment"
I0904 04:51:49.240463       1 shared_informer.go:240] Waiting for caches to sync for deployment
I0904 04:51:49.288042       1 shared_informer.go:270] caches populated
... skipping 62 lines ...
I0904 04:51:51.332256       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0904 04:51:51.332512       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0904 04:51:51.332713       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0904 04:51:51.332961       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0904 04:51:51.333213       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0904 04:51:51.333415       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0904 04:51:51.333681       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 04:51:51.333847       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0904 04:51:51.334111       1 controllermanager.go:574] Started "persistentvolume-binder"
I0904 04:51:51.334321       1 controllermanager.go:559] Starting "root-ca-cert-publisher"
I0904 04:51:51.334292       1 pv_controller_base.go:308] Starting persistent volume controller
I0904 04:51:51.335945       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0904 04:51:51.347234       1 controllermanager.go:574] Started "root-ca-cert-publisher"
... skipping 100 lines ...
I0904 04:51:53.043435       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0904 04:51:53.043461       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0904 04:51:53.043476       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0904 04:51:53.043491       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0904 04:51:53.043506       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0904 04:51:53.043518       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0904 04:51:53.043561       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 04:51:53.043580       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0904 04:51:53.043739       1 controllermanager.go:574] Started "attachdetach"
I0904 04:51:53.043768       1 controllermanager.go:559] Starting "resourcequota"
I0904 04:51:53.043814       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 04:51:53.043831       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0904 04:51:53.363537       1 resource_quota_monitor.go:177] QuotaMonitor using a shared informer for resource "apps/v1, Resource=replicasets"
... skipping 412 lines ...
I0904 04:51:55.984878       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (2.8µs)
I0904 04:51:55.984844       1 certificate_controller.go:82] Adding certificate request csr-qm5ht
I0904 04:51:55.984889       1 certificate_controller.go:82] Adding certificate request csr-qm5ht
I0904 04:51:55.984911       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (3.1µs)
I0904 04:51:55.984923       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (24.701µs)
I0904 04:51:56.207952       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (223.661938ms)
I0904 04:51:56.208263       1 certificate_controller.go:151] Sync csr-qm5ht failed with : recognized csr "csr-qm5ht" as [selfnodeclient nodeclient] but subject access review was not approved
I0904 04:51:56.412371       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (3.942764ms)
I0904 04:51:56.412409       1 certificate_controller.go:151] Sync csr-qm5ht failed with : recognized csr "csr-qm5ht" as [selfnodeclient nodeclient] but subject access review was not approved
I0904 04:51:56.817607       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (5.056082ms)
I0904 04:51:56.818222       1 certificate_controller.go:151] Sync csr-qm5ht failed with : recognized csr "csr-qm5ht" as [selfnodeclient nodeclient] but subject access review was not approved
I0904 04:51:56.830353       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-fvszkt-control-plane-wnbrq}
I0904 04:51:56.830395       1 taint_manager.go:440] "Updating known taints on node" node="capz-fvszkt-control-plane-wnbrq" taints=[]
I0904 04:51:56.830368       1 controller.go:693] Ignoring node capz-fvszkt-control-plane-wnbrq with Ready condition status False
I0904 04:51:56.830881       1 controller.go:272] Triggering nodeSync
I0904 04:51:56.831052       1 controller.go:291] nodeSync has been triggered
I0904 04:51:56.831223       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 04:51:56.831376       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 04:51:56.831551       1 controller.go:731] It took 0.000331206 seconds to finish nodeSyncInternal
I0904 04:51:56.830852       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
W0904 04:51:56.831882       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-fvszkt-control-plane-wnbrq" does not exist
I0904 04:51:56.873811       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
I0904 04:51:57.042248       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
I0904 04:51:57.042986       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-fvszkt-control-plane-wnbrq" new_ttl="0s"
I0904 04:51:57.191432       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
I0904 04:51:57.513097       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="92.502µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54212" resp=200
I0904 04:51:57.629549       1 certificate_controller.go:173] Finished syncing certificate request "csr-qm5ht" (10.251267ms)
I0904 04:51:57.629879       1 certificate_controller.go:151] Sync csr-qm5ht failed with : recognized csr "csr-qm5ht" as [selfnodeclient nodeclient] but subject access review was not approved
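The repeated csr-qm5ht sync failures above clear once the subject access review is approved; the CSR can also be inspected or, if policy allows, approved out of band (sketch):

# Inspect the pending kubelet CSR and approve it manually.
kubectl --kubeconfig=./kubeconfig get csr csr-qm5ht -o wide
kubectl --kubeconfig=./kubeconfig certificate approve csr-qm5ht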
I0904 04:51:57.769463       1 daemon_controller.go:227] Adding daemon set kube-proxy-windows
I0904 04:51:57.779067       1 daemon_controller.go:395] ControllerRevision kube-proxy-windows-666f984cb4 added.
I0904 04:51:57.780717       1 controller_utils.go:206] Controller kube-system/kube-proxy-windows either never recorded expectations, or the ttl expired.
I0904 04:51:57.781123       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy-windows", timestamp:time.Time{wall:0xc0bd299b6e8ee217, ext:19574571334, loc:(*time.Location)(0x731ea80)}}
I0904 04:51:57.781414       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy-windows: [], creating 0
I0904 04:51:57.784663       1 daemon_controller.go:1030] Pods to delete for daemon set kube-proxy-windows: [], deleting 0
... skipping 61 lines ...
I0904 04:51:59.124918       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0904 04:51:59.129019       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-04 04:51:59.07908107 +0000 UTC m=+20.872536477 - now: 2022-09-04 04:51:59.129011378 +0000 UTC m=+20.922466785]
I0904 04:51:59.144727       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 04:51:59.145193       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 04:51:59.10253315 +0000 UTC m=+20.895988557 - now: 2022-09-04 04:51:59.14518524 +0000 UTC m=+20.938640547]
I0904 04:51:59.201405       1 daemon_controller.go:227] Adding daemon set kube-proxy
I0904 04:51:59.371147       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="362.233863ms"
I0904 04:51:59.371195       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0904 04:51:59.371242       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 04:51:59.371219499 +0000 UTC m=+21.164674806"
I0904 04:51:59.372147       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-04 04:51:59 +0000 UTC - now: 2022-09-04 04:51:59.372138813 +0000 UTC m=+21.165594220]
I0904 04:51:59.400685       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="391.047429ms"
I0904 04:51:59.400770       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 04:51:59.400814       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 04:51:59.400792877 +0000 UTC m=+21.194248184"
I0904 04:51:59.402609       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 04:51:59 +0000 UTC - now: 2022-09-04 04:51:59.402601506 +0000 UTC m=+21.196056913]
I0904 04:51:59.416842       1 daemon_controller.go:395] ControllerRevision kube-proxy-76c9478db6 added.
I0904 04:51:59.427525       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0904 04:51:59.427561       1 certificate_controller.go:87] Updating certificate request csr-qm5ht
I0904 04:51:59.427599       1 certificate_controller.go:87] Updating certificate request csr-qm5ht
... skipping 196 lines ...
I0904 04:52:03.512126       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 04:52:03.499636454 +0000 UTC m=+25.293091861 - now: 2022-09-04 04:52:03.512118066 +0000 UTC m=+25.305573373]
I0904 04:52:03.520981       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-969cf87c4"
I0904 04:52:03.521780       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (24.765112ms)
I0904 04:52:03.522245       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd299cdda1f2b4, ext:25290608099, loc:(*time.Location)(0x731ea80)}}
I0904 04:52:03.522404       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 04:52:03.527298       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="38.953235ms"
I0904 04:52:03.527356       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 04:52:03.527412       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 04:52:03.52738966 +0000 UTC m=+25.320844967"
I0904 04:52:03.527681       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-zfj5n"
I0904 04:52:03.527882       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-zfj5n, PodDisruptionBudget controller will avoid syncing.
I0904 04:52:03.528076       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-969cf87c4-zfj5n"
I0904 04:52:03.528247       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 04:52:03 +0000 UTC - now: 2022-09-04 04:52:03.528240215 +0000 UTC m=+25.321695622]
I0904 04:52:03.528633       1 replica_set.go:439] Pod calico-kube-controllers-969cf87c4-zfj5n updated, objectMeta {Name:calico-kube-controllers-969cf87c4-zfj5n GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:30157c21-1061-43dc-badd-e83e410af790 ResourceVersion:546 Generation:0 CreationTimestamp:2022-09-04 04:52:03 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:0cc5ca72-5a64-4899-9411-715d08ac5a12 Controller:0xc0021e5417 BlockOwnerDeletion:0xc0021e5418}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:52:03 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cc5ca72-5a64-4899-9411-715d08ac5a12\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}}]} -> {Name:calico-kube-controllers-969cf87c4-zfj5n GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:30157c21-1061-43dc-badd-e83e410af790 ResourceVersion:552 Generation:0 CreationTimestamp:2022-09-04 04:52:03 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:0cc5ca72-5a64-4899-9411-715d08ac5a12 Controller:0xc0021e5b57 BlockOwnerDeletion:0xc0021e5b58}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:52:03 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0cc5ca72-5a64-4899-9411-715d08ac5a12\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 04:52:03 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]}.
... skipping 315 lines ...
I0904 04:52:23.903939       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 04:52:23.904143       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (21h25m59.584090935s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 04:52:23.904331       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 04:52:24.004095       1 shared_informer.go:270] caches populated
I0904 04:52:24.004127       1 shared_informer.go:247] Caches are synced for resource quota 
I0904 04:52:24.004138       1 resource_quota_controller.go:454] synced quota controller
W0904 04:52:24.333046       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 04:52:24.333522       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0904 04:52:24.333558       1 garbagecollector.go:219] reset restmapper
E0904 04:52:24.356358       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0904 04:52:24.359273       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 04:52:24.361949       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ippools", kind "crd.projectcalico.org/v1, Kind=IPPool"
I0904 04:52:24.362141       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=hostendpoints", kind "crd.projectcalico.org/v1, Kind=HostEndpoint"
... skipping 95 lines ...
I0904 04:52:35.024500       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd29a4c175c278, ext:56817950119, loc:(*time.Location)(0x731ea80)}}
I0904 04:52:35.024694       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0904 04:52:35.024906       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0904 04:52:35.025039       1 daemon_controller.go:1103] Updating daemon set status
I0904 04:52:35.025241       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (3.963842ms)
I0904 04:52:35.163142       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
I0904 04:52:38.646755       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-fvszkt-control-plane-wnbrq transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 04:52:14 +0000 UTC,LastTransitionTime:2022-09-04 04:51:29 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 04:52:34 +0000 UTC,LastTransitionTime:2022-09-04 04:52:34 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 04:52:38.646868       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-control-plane-wnbrq ReadyCondition updated. Updating timestamp.
I0904 04:52:38.646899       1 node_lifecycle_controller.go:893] Node capz-fvszkt-control-plane-wnbrq is healthy again, removing all taints
I0904 04:52:38.646936       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 04:52:38.667109       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:52:38.838275       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:52:39.988994       1 disruption.go:427] updatePod called on pod "calico-node-r5xbn"
... skipping 194 lines ...
I0904 04:52:54.126024       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bd299cdda1f2b4, ext:25290608099, loc:(*time.Location)(0x731ea80)}}
I0904 04:52:54.126282       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (264.503µs)
I0904 04:52:54.132909       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0904 04:52:54.133616       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="15.417969ms"
I0904 04:52:54.133788       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 04:52:54.133650338 +0000 UTC m=+75.927105645"
I0904 04:52:54.134660       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="972.111µs"
W0904 04:52:55.262214       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 04:52:58.981512       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-fsf96"
I0904 04:52:58.981834       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-fsf96, PodDisruptionBudget controller will avoid syncing.
I0904 04:52:58.981948       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-fsf96"
I0904 04:52:58.983171       1 endpoints_controller.go:555] Update endpoints for kube-system/metrics-server, ready: 0 not ready: 1
I0904 04:52:58.984148       1 replica_set.go:439] Pod metrics-server-8c95fb79b-fsf96 updated, objectMeta {Name:metrics-server-8c95fb79b-fsf96 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:d8ff30fa-ae8d-45cf-9164-71c0ef6f36f7 ResourceVersion:732 Generation:0 CreationTimestamp:2022-09-04 04:51:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:cb22cd8fe2054394ef8862cd4fde4b94b6a2efbe946d72eeb39f636fa9b4626a cni.projectcalico.org/podIP:192.168.106.194/32 cni.projectcalico.org/podIPs:192.168.106.194/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:09a118f7-525d-4bcb-b356-1e257cca3b9c Controller:0xc002299887 BlockOwnerDeletion:0xc002299888}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"09a118f7-525d-4bcb-b356-1e257cca3b9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 04:52:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}} 
{Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 04:52:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}}]} -> {Name:metrics-server-8c95fb79b-fsf96 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:d8ff30fa-ae8d-45cf-9164-71c0ef6f36f7 ResourceVersion:790 Generation:0 CreationTimestamp:2022-09-04 04:51:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:cb22cd8fe2054394ef8862cd4fde4b94b6a2efbe946d72eeb39f636fa9b4626a cni.projectcalico.org/podIP:192.168.106.194/32 cni.projectcalico.org/podIPs:192.168.106.194/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:09a118f7-525d-4bcb-b356-1e257cca3b9c Controller:0xc0008063d7 BlockOwnerDeletion:0xc0008063d8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"09a118f7-525d-4bcb-b356-1e257cca3b9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 04:52:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 04:52:58 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.106.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]}.
I0904 04:52:58.984710       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bd299bc4ad9325, ext:20871939668, loc:(*time.Location)(0x731ea80)}}
... skipping 16 lines ...
I0904 04:53:21.322707       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="65.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53758" resp=200
I0904 04:53:23.594869       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:53:23.669051       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:53:23.840488       1 pv_controller_base.go:528] resyncing PV controller
E0904 04:53:24.070534       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 04:53:24.071018       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
W0904 04:53:25.309030       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 04:53:25.508236       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-fsf96"
I0904 04:53:25.508443       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-fsf96, PodDisruptionBudget controller will avoid syncing.
I0904 04:53:25.508488       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-fsf96"
I0904 04:53:25.508951       1 endpoints_controller.go:555] Update endpoints for kube-system/metrics-server, ready: 1 not ready: 0
I0904 04:53:25.509448       1 replica_set.go:439] Pod metrics-server-8c95fb79b-fsf96 updated, objectMeta {Name:metrics-server-8c95fb79b-fsf96 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:d8ff30fa-ae8d-45cf-9164-71c0ef6f36f7 ResourceVersion:790 Generation:0 CreationTimestamp:2022-09-04 04:51:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:cb22cd8fe2054394ef8862cd4fde4b94b6a2efbe946d72eeb39f636fa9b4626a cni.projectcalico.org/podIP:192.168.106.194/32 cni.projectcalico.org/podIPs:192.168.106.194/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:09a118f7-525d-4bcb-b356-1e257cca3b9c Controller:0xc0008063d7 BlockOwnerDeletion:0xc0008063d8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"09a118f7-525d-4bcb-b356-1e257cca3b9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 04:52:46 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 04:52:58 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.106.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]} -> {Name:metrics-server-8c95fb79b-fsf96 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:d8ff30fa-ae8d-45cf-9164-71c0ef6f36f7 ResourceVersion:825 Generation:0 CreationTimestamp:2022-09-04 04:51:59 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:cb22cd8fe2054394ef8862cd4fde4b94b6a2efbe946d72eeb39f636fa9b4626a cni.projectcalico.org/podIP:192.168.106.194/32 cni.projectcalico.org/podIPs:192.168.106.194/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:09a118f7-525d-4bcb-b356-1e257cca3b9c Controller:0xc001f67ac0 BlockOwnerDeletion:0xc001f67ac1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"09a118f7-525d-4bcb-b356-1e257cca3b9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 04:51:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 04:52:46 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 04:53:25 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.106.194\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]}.
I0904 04:53:25.515681       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bd299bc4ad9325, ext:20871939668, loc:(*time.Location)(0x731ea80)}}
... skipping 65 lines ...
I0904 04:53:39.595352       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-fvszkt-md-0-jvr4s], creating 1
I0904 04:53:39.594630       1 controller.go:272] Triggering nodeSync
I0904 04:53:39.594638       1 controller.go:291] nodeSync has been triggered
I0904 04:53:39.595763       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 04:53:39.595813       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 04:53:39.595838       1 controller.go:731] It took 7.9001e-05 seconds to finish nodeSyncInternal
W0904 04:53:39.594744       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-fvszkt-md-0-jvr4s" does not exist
I0904 04:53:39.620015       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-fvszkt-md-0-jvr4s" new_ttl="0s"
I0904 04:53:39.620600       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 04:53:39.642953       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 04:53:39.652539       1 disruption.go:415] addPod called on pod "kube-proxy-n6gwt"
I0904 04:53:39.652761       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-n6gwt, PodDisruptionBudget controller will avoid syncing.
I0904 04:53:39.652944       1 disruption.go:418] No matching pdb for pod "kube-proxy-n6gwt"
... skipping 177 lines ...
I0904 04:53:49.540061       1 controller.go:272] Triggering nodeSync
I0904 04:53:49.540245       1 controller.go:291] nodeSync has been triggered
I0904 04:53:49.540488       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 04:53:49.540667       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 04:53:49.540991       1 controller.go:731] It took 0.000505308 seconds to finish nodeSyncInternal
I0904 04:53:49.541148       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
W0904 04:53:49.541271       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-fvszkt-md-0-tjdcv" does not exist
I0904 04:53:49.543673       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd29b58645e953, ext:123898700418, loc:(*time.Location)(0x731ea80)}}
I0904 04:53:49.549128       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd29b760bae786, ext:131342575285, loc:(*time.Location)(0x731ea80)}}
I0904 04:53:49.549587       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-fvszkt-md-0-tjdcv], creating 1
I0904 04:53:49.583687       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:53:49.584171       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-fvszkt-md-0-tjdcv" new_ttl="0s"
I0904 04:53:49.586380       1 controller_utils.go:591] Controller kube-proxy created pod kube-proxy-wt8f6
... skipping 337 lines ...
I0904 04:54:12.818918       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd29bd30cda5e1, ext:154612239120, loc:(*time.Location)(0x731ea80)}}
I0904 04:54:12.819032       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd29bd30d15ea0, ext:154612482923, loc:(*time.Location)(0x731ea80)}}
I0904 04:54:12.819063       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0904 04:54:12.819116       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0904 04:54:12.819156       1 daemon_controller.go:1103] Updating daemon set status
I0904 04:54:12.819239       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (2.670534ms)
I0904 04:54:13.661524       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-fvszkt-md-0-jvr4s transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 04:53:49 +0000 UTC,LastTransitionTime:2022-09-04 04:53:39 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 04:54:09 +0000 UTC,LastTransitionTime:2022-09-04 04:54:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 04:54:13.661675       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-jvr4s ReadyCondition updated. Updating timestamp.
I0904 04:54:13.667666       1 gc_controller.go:161] GC'ing orphaned
I0904 04:54:13.667694       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:54:13.680012       1 node_lifecycle_controller.go:893] Node capz-fvszkt-md-0-jvr4s is healthy again, removing all taints
I0904 04:54:13.680889       1 node_lifecycle_controller.go:1214] Controller detected that zone uksouth::0 is now in state Normal.
I0904 04:54:13.682459       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-fvszkt-md-0-jvr4s}
... skipping 60 lines ...
I0904 04:54:21.836775       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (13.194515ms)
I0904 04:54:21.896391       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:54:21.936343       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:54:22.146674       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:54:23.596741       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:54:23.671632       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:54:23.682964       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-fvszkt-md-0-tjdcv transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 04:53:59 +0000 UTC,LastTransitionTime:2022-09-04 04:53:49 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 04:54:19 +0000 UTC,LastTransitionTime:2022-09-04 04:54:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 04:54:23.683046       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-tjdcv ReadyCondition updated. Updating timestamp.
I0904 04:54:23.702913       1 node_lifecycle_controller.go:893] Node capz-fvszkt-md-0-tjdcv is healthy again, removing all taints
I0904 04:54:23.703297       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:54:23.703924       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-fvszkt-md-0-tjdcv}
I0904 04:54:23.704393       1 taint_manager.go:440] "Updating known taints on node" node="capz-fvszkt-md-0-tjdcv" taints=[]
I0904 04:54:23.705013       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-fvszkt-md-0-tjdcv"
... skipping 336 lines ...
I0904 04:57:37.122644       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:57:37.122674       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:57:37.122760       1 pv_controller.go:1764] operation "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" is already running, skipping
I0904 04:57:37.122812       1 pv_protection_controller.go:205] Got event on PV pvc-643af74e-feff-4184-a1b2-4748a80ef477
I0904 04:57:37.172373       1 pv_controller.go:1341] isVolumeReleased[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is released
I0904 04:57:37.172399       1 pv_controller.go:1405] doDeleteVolume [pvc-643af74e-feff-4184-a1b2-4748a80ef477]
I0904 04:57:37.207536       1 pv_controller.go:1260] deletion of volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 04:57:37.207569       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: set phase Failed
I0904 04:57:37.207583       1 pv_controller.go:858] updating PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: set phase Failed
I0904 04:57:37.214541       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1356
I0904 04:57:37.214694       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase: Failed, bound to: "azuredisk-8081/pvc-kh7dr (uid: 643af74e-feff-4184-a1b2-4748a80ef477)", boundByController: true
I0904 04:57:37.215003       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is bound to claim azuredisk-8081/pvc-kh7dr
I0904 04:57:37.215267       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: claim azuredisk-8081/pvc-kh7dr not found
I0904 04:57:37.215566       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:57:37.215797       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:57:37.215906       1 pv_controller.go:1764] operation "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" is already running, skipping
I0904 04:57:37.215273       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1356
I0904 04:57:37.215940       1 pv_controller.go:879] volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" entered phase "Failed"
I0904 04:57:37.214899       1 pv_protection_controller.go:205] Got event on PV pvc-643af74e-feff-4184-a1b2-4748a80ef477
I0904 04:57:37.216142       1 pv_controller.go:901] volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
E0904 04:57:37.216551       1 goroutinemap.go:150] Operation for "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" failed. No retries permitted until 2022-09-04 04:57:37.716504014 +0000 UTC m=+359.509959421 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 04:57:37.217031       1 event.go:291] "Event occurred" object="pvc-643af74e-feff-4184-a1b2-4748a80ef477" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 04:57:38.595229       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 7 items received
I0904 04:57:38.680218       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:57:38.851534       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:57:38.851615       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1356
I0904 04:57:38.851674       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase: Failed, bound to: "azuredisk-8081/pvc-kh7dr (uid: 643af74e-feff-4184-a1b2-4748a80ef477)", boundByController: true
I0904 04:57:38.851710       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is bound to claim azuredisk-8081/pvc-kh7dr
I0904 04:57:38.851737       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: claim azuredisk-8081/pvc-kh7dr not found
I0904 04:57:38.851747       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:57:38.851765       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:57:38.851811       1 pv_controller.go:1232] deleteVolumeOperation [pvc-643af74e-feff-4184-a1b2-4748a80ef477] started
I0904 04:57:38.865625       1 pv_controller.go:1341] isVolumeReleased[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is released
I0904 04:57:38.865650       1 pv_controller.go:1405] doDeleteVolume [pvc-643af74e-feff-4184-a1b2-4748a80ef477]
I0904 04:57:38.899056       1 pv_controller.go:1260] deletion of volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 04:57:38.899087       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: set phase Failed
I0904 04:57:38.899100       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase Failed already set
E0904 04:57:38.899155       1 goroutinemap.go:150] Operation for "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" failed. No retries permitted until 2022-09-04 04:57:39.899110272 +0000 UTC m=+361.692565579 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 04:57:39.931968       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:57:39.934952       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477 to the node "capz-fvszkt-md-0-tjdcv" mounted false
I0904 04:57:39.981265       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 04:57:39.981303       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477 to the node "capz-fvszkt-md-0-tjdcv" mounted false
I0904 04:57:39.981456       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-fvszkt-md-0-tjdcv" succeeded. VolumesAttached: []
I0904 04:57:39.981527       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477") on node "capz-fvszkt-md-0-tjdcv" 
... skipping 14 lines ...
I0904 04:57:53.675155       1 gc_controller.go:161] GC'ing orphaned
I0904 04:57:53.675189       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:57:53.681298       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:57:53.739338       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-tjdcv ReadyCondition updated. Updating timestamp.
I0904 04:57:53.851956       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:57:53.852012       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1356
I0904 04:57:53.852050       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase: Failed, bound to: "azuredisk-8081/pvc-kh7dr (uid: 643af74e-feff-4184-a1b2-4748a80ef477)", boundByController: true
I0904 04:57:53.852089       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is bound to claim azuredisk-8081/pvc-kh7dr
I0904 04:57:53.852112       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: claim azuredisk-8081/pvc-kh7dr not found
I0904 04:57:53.852121       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:57:53.852141       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:57:53.852178       1 pv_controller.go:1232] deleteVolumeOperation [pvc-643af74e-feff-4184-a1b2-4748a80ef477] started
I0904 04:57:53.863907       1 pv_controller.go:1341] isVolumeReleased[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is released
I0904 04:57:53.863930       1 pv_controller.go:1405] doDeleteVolume [pvc-643af74e-feff-4184-a1b2-4748a80ef477]
I0904 04:57:53.863969       1 pv_controller.go:1260] deletion of volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) since it's in attaching or detaching state
I0904 04:57:53.863989       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: set phase Failed
I0904 04:57:53.864001       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase Failed already set
E0904 04:57:53.864043       1 goroutinemap.go:150] Operation for "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" failed. No retries permitted until 2022-09-04 04:57:55.864010275 +0000 UTC m=+377.657465682 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) since it's in attaching or detaching state"
I0904 04:57:53.955558       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 04:57:54.327360       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 04:57:54.604286       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.Ingress total 0 items received
I0904 04:57:57.802516       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0904 04:58:00.407431       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-tjdcv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) returned with <nil>
I0904 04:58:00.407474       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477) succeeded
... skipping 3 lines ...
I0904 04:58:04.851955       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 04:58:05.129151       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-control-plane-wnbrq"
I0904 04:58:08.681480       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:58:08.741490       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-control-plane-wnbrq ReadyCondition updated. Updating timestamp.
I0904 04:58:08.852378       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:58:08.852507       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1356
I0904 04:58:08.852572       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase: Failed, bound to: "azuredisk-8081/pvc-kh7dr (uid: 643af74e-feff-4184-a1b2-4748a80ef477)", boundByController: true
I0904 04:58:08.852646       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is bound to claim azuredisk-8081/pvc-kh7dr
I0904 04:58:08.852687       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: claim azuredisk-8081/pvc-kh7dr not found
I0904 04:58:08.852698       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:58:08.852717       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:58:08.852815       1 pv_controller.go:1232] deleteVolumeOperation [pvc-643af74e-feff-4184-a1b2-4748a80ef477] started
I0904 04:58:08.860617       1 pv_controller.go:1341] isVolumeReleased[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is released
... skipping 2 lines ...
I0904 04:58:13.675345       1 gc_controller.go:161] GC'ing orphaned
I0904 04:58:13.675380       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:58:14.057701       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-643af74e-feff-4184-a1b2-4748a80ef477
I0904 04:58:14.057809       1 pv_controller.go:1436] volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" deleted
I0904 04:58:14.057830       1 pv_controller.go:1284] deleteVolumeOperation [pvc-643af74e-feff-4184-a1b2-4748a80ef477]: success
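Note on the delete retries above: each failed attempt schedules the next one with a doubling durationBeforeRetry (500ms, then 1s, then 2s) until the disk is detached and the delete at 04:58:14 succeeds. A minimal sketch of that capped exponential-backoff pattern, illustrative only and not the kube-controller-manager code; the operation here is simulated:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff retries op, doubling the wait after every failure,
    // capping the wait at maxDelay, and giving up after maxAttempts.
    func retryWithBackoff(op func() error, initial, maxDelay time.Duration, maxAttempts int) error {
    	delay := initial
    	var err error
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		if err = op(); err == nil {
    			return nil
    		}
    		fmt.Printf("attempt %d failed: %v; retrying in %s\n", attempt, err, delay)
    		time.Sleep(delay)
    		if delay *= 2; delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
    }

    func main() {
    	calls := 0
    	// Simulated operation: fails a few times (like the delete attempts above), then succeeds.
    	op := func() error {
    		calls++
    		if calls <= 3 {
    			return errors.New("disk still attached")
    		}
    		return nil
    	}
    	if err := retryWithBackoff(op, 500*time.Millisecond, 5*time.Minute, 10); err != nil {
    		fmt.Println(err)
    	}
    }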
I0904 04:58:14.075562       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-643af74e-feff-4184-a1b2-4748a80ef477" with version 1411
I0904 04:58:14.075617       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: phase: Failed, bound to: "azuredisk-8081/pvc-kh7dr (uid: 643af74e-feff-4184-a1b2-4748a80ef477)", boundByController: true
I0904 04:58:14.075655       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: volume is bound to claim azuredisk-8081/pvc-kh7dr
I0904 04:58:14.075679       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: claim azuredisk-8081/pvc-kh7dr not found
I0904 04:58:14.075694       1 pv_controller.go:1108] reclaimVolume[pvc-643af74e-feff-4184-a1b2-4748a80ef477]: policy is Delete
I0904 04:58:14.075713       1 pv_controller.go:1753] scheduleOperation[delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]]
I0904 04:58:14.075724       1 pv_controller.go:1764] operation "delete-pvc-643af74e-feff-4184-a1b2-4748a80ef477[0fd4af21-e403-43d2-88fe-90c69990c965]" is already running, skipping
I0904 04:58:14.075743       1 pv_protection_controller.go:205] Got event on PV pvc-643af74e-feff-4184-a1b2-4748a80ef477
... skipping 75 lines ...
I0904 04:58:24.354465       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 04:58:24.588138       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0904 04:58:25.087620       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0904 04:58:25.118090       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid 806dfcc3-7b4c-429e-ba2a-126861ebcffe, event type delete
I0904 04:58:25.120485       1 publisher.go:181] Finished syncing namespace "azuredisk-2540" (2.476637ms)
I0904 04:58:25.149122       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-bhcb9, uid 634897c5-5dd2-490f-968b-d3ff7d14a659, event type delete
E0904 04:58:25.165237       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-mlpvh" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0904 04:58:25.180617       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0904 04:58:25.180788       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.9µs)
I0904 04:58:25.181005       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid aaf6340f-25b0-4c5c-af95-1e97c69c9785, event type delete
I0904 04:58:25.300981       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3.5µs)
I0904 04:58:25.303092       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0904 04:58:25.325944       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (243.063479ms)
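
The tokens_controller error above ("unable to create new content in namespace azuredisk-2540 because it is being terminated") is the expected path during test-namespace teardown: once a namespace enters the Terminating phase, the API server refuses new object creation in it, so the token controller stops re-creating the token secret and instead removes the remaining tokens. A hedged sketch of that check (hypothetical helper, not the actual admission code):

package main

import "fmt"

// canCreateIn is an illustrative stand-in for the API server's behaviour:
// object creation in a namespace is refused while the namespace is
// terminating, which is exactly the "forbidden" error in the log above.
func canCreateIn(namespacePhase string) error {
    if namespacePhase == "Terminating" {
        return fmt.Errorf("unable to create new content in namespace because it is being terminated")
    }
    return nil
}

func main() {
    fmt.Println(canCreateIn("Terminating"))
}
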
... skipping 200 lines ...
I0904 04:58:39.763800       1 pv_controller.go:1108] reclaimVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: policy is Delete
I0904 04:58:39.763904       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220] started
I0904 04:58:39.764157       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]]
I0904 04:58:39.764186       1 pv_controller.go:1764] operation "delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]" is already running, skipping
I0904 04:58:39.766136       1 pv_controller.go:1341] isVolumeReleased[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is released
I0904 04:58:39.766160       1 pv_controller.go:1405] doDeleteVolume [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]
I0904 04:58:39.803591       1 pv_controller.go:1260] deletion of volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 04:58:39.803620       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: set phase Failed
I0904 04:58:39.803632       1 pv_controller.go:858] updating PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: set phase Failed
I0904 04:58:39.810460       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" with version 1518
I0904 04:58:39.810690       1 pv_controller.go:879] volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" entered phase "Failed"
I0904 04:58:39.810713       1 pv_controller.go:901] volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
E0904 04:58:39.810777       1 goroutinemap.go:150] Operation for "delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]" failed. No retries permitted until 2022-09-04 04:58:40.310738712 +0000 UTC m=+422.104194119 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 04:58:39.810511       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" with version 1518
I0904 04:58:39.811070       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: phase: Failed, bound to: "azuredisk-5466/pvc-njkf6 (uid: 2bb8c0b9-7a84-41c4-a41b-29671c24d220)", boundByController: true
I0904 04:58:39.811102       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is bound to claim azuredisk-5466/pvc-njkf6
I0904 04:58:39.811127       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: claim azuredisk-5466/pvc-njkf6 not found
I0904 04:58:39.811138       1 pv_controller.go:1108] reclaimVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: policy is Delete
I0904 04:58:39.811156       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]]
I0904 04:58:39.811166       1 pv_controller.go:1766] operation "delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]" postponed due to exponential backoff
I0904 04:58:39.810552       1 pv_protection_controller.go:205] Got event on PV pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220
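
The goroutinemap error above is the normal retry path rather than a test failure: the first failed delete is postponed for 500ms, the next failure (further down) doubles that to 1s, then 2s, and so on, until a later attempt finds the disk detached and succeeds. A small, assumption-laden sketch of that doubling schedule (illustration only, not the goroutinemap implementation):

package main

import (
    "fmt"
    "time"
)

// nextDelay doubles the retry delay after every failed operation, starting
// from 500ms, matching the durationBeforeRetry values seen in this log
// (500ms, 1s, 2s, ...).
func nextDelay(current time.Duration) time.Duration {
    if current == 0 {
        return 500 * time.Millisecond
    }
    return 2 * current
}

func main() {
    d := time.Duration(0)
    for attempt := 1; attempt <= 4; attempt++ {
        d = nextDelay(d)
        fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, d)
    }
}
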
... skipping 18 lines ...
I0904 04:58:53.676788       1 gc_controller.go:161] GC'ing orphaned
I0904 04:58:53.676821       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:58:53.683225       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:58:53.751459       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-tjdcv ReadyCondition updated. Updating timestamp.
I0904 04:58:53.854049       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:58:53.854118       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" with version 1518
I0904 04:58:53.854182       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: phase: Failed, bound to: "azuredisk-5466/pvc-njkf6 (uid: 2bb8c0b9-7a84-41c4-a41b-29671c24d220)", boundByController: true
I0904 04:58:53.854222       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is bound to claim azuredisk-5466/pvc-njkf6
I0904 04:58:53.854243       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: claim azuredisk-5466/pvc-njkf6 not found
I0904 04:58:53.854257       1 pv_controller.go:1108] reclaimVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: policy is Delete
I0904 04:58:53.854274       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]]
I0904 04:58:53.854305       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220] started
I0904 04:58:53.864815       1 pv_controller.go:1341] isVolumeReleased[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is released
I0904 04:58:53.864835       1 pv_controller.go:1405] doDeleteVolume [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]
I0904 04:58:53.864871       1 pv_controller.go:1260] deletion of volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) since it's in attaching or detaching state
I0904 04:58:53.864884       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: set phase Failed
I0904 04:58:53.864893       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: phase Failed already set
E0904 04:58:53.864943       1 goroutinemap.go:150] Operation for "delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]" failed. No retries permitted until 2022-09-04 04:58:54.86490062 +0000 UTC m=+436.658355927 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) since it's in attaching or detaching state"
I0904 04:58:54.381225       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 04:59:01.324712       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="67.901µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54750" resp=200
I0904 04:59:05.478954       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-tjdcv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) returned with <nil>
I0904 04:59:05.479010       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220) succeeded
I0904 04:59:05.479070       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220 was detached from node:capz-fvszkt-md-0-tjdcv
I0904 04:59:05.479154       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220") on node "capz-fvszkt-md-0-tjdcv" 
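
The lines above close the loop for pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220: deletion first fails with "already attached to node", then with "in attaching or detaching state" while the detach is in flight, and only after DetachVolume.Detach succeeds can the managed disk actually be deleted (which happens a few seconds later, below). As a hedged illustration of that ordering (made-up helper, not the Azure cloud-provider API):

package main

import (
    "errors"
    "fmt"
)

// deleteManagedDisk refuses to delete a disk that is still attached or in a
// transitional attach/detach state, mirroring the two failure messages in
// the log; only a fully detached disk is deleted.
func deleteManagedDisk(state string) error {
    switch state {
    case "attached":
        return errors.New("disk already attached to node, could not be deleted")
    case "attaching", "detaching":
        return errors.New("failed to delete disk since it's in attaching or detaching state")
    default:
        return nil // detached: safe to delete
    }
}

func main() {
    for _, s := range []string{"attached", "detaching", "detached"} {
        fmt.Printf("state=%s err=%v\n", s, deleteManagedDisk(s))
    }
}
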
I0904 04:59:05.580932       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 33 items received
I0904 04:59:05.667394       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 0 items received
I0904 04:59:08.683504       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:59:08.854600       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:59:08.854869       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" with version 1518
I0904 04:59:08.854928       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: phase: Failed, bound to: "azuredisk-5466/pvc-njkf6 (uid: 2bb8c0b9-7a84-41c4-a41b-29671c24d220)", boundByController: true
I0904 04:59:08.854974       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is bound to claim azuredisk-5466/pvc-njkf6
I0904 04:59:08.855009       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: claim azuredisk-5466/pvc-njkf6 not found
I0904 04:59:08.855027       1 pv_controller.go:1108] reclaimVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: policy is Delete
I0904 04:59:08.855051       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]]
I0904 04:59:08.855096       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220] started
I0904 04:59:08.863434       1 pv_controller.go:1341] isVolumeReleased[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is released
... skipping 2 lines ...
I0904 04:59:13.677640       1 gc_controller.go:161] GC'ing orphaned
I0904 04:59:13.677692       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:59:14.099393       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220
I0904 04:59:14.099433       1 pv_controller.go:1436] volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" deleted
I0904 04:59:14.099448       1 pv_controller.go:1284] deleteVolumeOperation [pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: success
I0904 04:59:14.116846       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220" with version 1569
I0904 04:59:14.116958       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: phase: Failed, bound to: "azuredisk-5466/pvc-njkf6 (uid: 2bb8c0b9-7a84-41c4-a41b-29671c24d220)", boundByController: true
I0904 04:59:14.117051       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: volume is bound to claim azuredisk-5466/pvc-njkf6
I0904 04:59:14.117125       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: claim azuredisk-5466/pvc-njkf6 not found
I0904 04:59:14.117146       1 pv_controller.go:1108] reclaimVolume[pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220]: policy is Delete
I0904 04:59:14.117168       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]]
I0904 04:59:14.117234       1 pv_controller.go:1764] operation "delete-pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220[9b6a1143-d246-4615-a57a-3dd3375c7f21]" is already running, skipping
I0904 04:59:14.117263       1 pv_protection_controller.go:205] Got event on PV pvc-2bb8c0b9-7a84-41c4-a41b-29671c24d220
... skipping 249 lines ...
I0904 04:59:38.934244       1 pv_controller.go:1108] reclaimVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: policy is Delete
I0904 04:59:38.934256       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]]
I0904 04:59:38.934264       1 pv_controller.go:1764] operation "delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]" is already running, skipping
I0904 04:59:38.934291       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8] started
I0904 04:59:38.939175       1 pv_controller.go:1341] isVolumeReleased[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is released
I0904 04:59:38.939397       1 pv_controller.go:1405] doDeleteVolume [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]
I0904 04:59:38.978475       1 pv_controller.go:1260] deletion of volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 04:59:38.978507       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: set phase Failed
I0904 04:59:38.978519       1 pv_controller.go:858] updating PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: set phase Failed
I0904 04:59:38.982926       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" with version 1662
I0904 04:59:38.983216       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: phase: Failed, bound to: "azuredisk-2790/pvc-rtf9g (uid: 9d23cae1-6da4-46dd-8204-a5861c44f2c8)", boundByController: true
I0904 04:59:38.983648       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" with version 1662
I0904 04:59:38.983949       1 pv_controller.go:879] volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" entered phase "Failed"
I0904 04:59:38.983133       1 pv_protection_controller.go:205] Got event on PV pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8
I0904 04:59:38.983870       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is bound to claim azuredisk-2790/pvc-rtf9g
I0904 04:59:38.984627       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: claim azuredisk-2790/pvc-rtf9g not found
I0904 04:59:38.984669       1 pv_controller.go:1108] reclaimVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: policy is Delete
I0904 04:59:38.984846       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]]
I0904 04:59:38.984943       1 pv_controller.go:1764] operation "delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]" is already running, skipping
I0904 04:59:38.984981       1 pv_controller.go:901] volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 04:59:38.985587       1 event.go:291] "Event occurred" object="pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
E0904 04:59:38.985829       1 goroutinemap.go:150] Operation for "delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]" failed. No retries permitted until 2022-09-04 04:59:39.485627833 +0000 UTC m=+481.279083240 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 04:59:40.097254       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 04:59:40.097343       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 04:59:40.161495       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-fvszkt-md-0-jvr4s" succeeded. VolumesAttached: []
I0904 04:59:40.164439       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8") on node "capz-fvszkt-md-0-jvr4s" 
I0904 04:59:40.167477       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 04:59:40.167693       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 to the node "capz-fvszkt-md-0-jvr4s" mounted false
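
Before the detach for pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 is reconciled, the attach/detach controller marks the volume as no longer mounted by the node and patches node.status with volumesAttached cleared, which is the {"status":{"volumesAttached":null}} patch logged above. A minimal sketch of building that same patch body (illustrative only, using plain encoding/json rather than the controller's node status updater):

package main

import (
    "encoding/json"
    "fmt"
)

// nodeStatusPatch reproduces the patch body seen in the log: setting
// volumesAttached to null clears the stale attachment entries from the
// node's status before the detach completes.
type nodeStatusPatch struct {
    Status struct {
        VolumesAttached interface{} `json:"volumesAttached"`
    } `json:"status"`
}

func main() {
    var p nodeStatusPatch // zero value leaves VolumesAttached nil -> "null"
    b, _ := json.Marshal(p)
    fmt.Println(string(b)) // {"status":{"volumesAttached":null}}
}
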
... skipping 9 lines ...
I0904 04:59:53.603680       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:59:53.678862       1 gc_controller.go:161] GC'ing orphaned
I0904 04:59:53.678895       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 04:59:53.684471       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 04:59:53.856924       1 pv_controller_base.go:528] resyncing PV controller
I0904 04:59:53.857006       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" with version 1662
I0904 04:59:53.857300       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: phase: Failed, bound to: "azuredisk-2790/pvc-rtf9g (uid: 9d23cae1-6da4-46dd-8204-a5861c44f2c8)", boundByController: true
I0904 04:59:53.857345       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is bound to claim azuredisk-2790/pvc-rtf9g
I0904 04:59:53.857369       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: claim azuredisk-2790/pvc-rtf9g not found
I0904 04:59:53.857386       1 pv_controller.go:1108] reclaimVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: policy is Delete
I0904 04:59:53.857405       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]]
I0904 04:59:53.857449       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8] started
I0904 04:59:53.873434       1 pv_controller.go:1341] isVolumeReleased[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is released
I0904 04:59:53.873458       1 pv_controller.go:1405] doDeleteVolume [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]
I0904 04:59:53.873625       1 pv_controller.go:1260] deletion of volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) since it's in attaching or detaching state
I0904 04:59:53.873646       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: set phase Failed
I0904 04:59:53.873658       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: phase Failed already set
E0904 04:59:53.873836       1 goroutinemap.go:150] Operation for "delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]" failed. No retries permitted until 2022-09-04 04:59:54.873788766 +0000 UTC m=+496.667244073 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) since it's in attaching or detaching state"
I0904 04:59:54.246098       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0904 04:59:54.439587       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 04:59:55.528700       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) returned with <nil>
I0904 04:59:55.528753       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8) succeeded
I0904 04:59:55.528794       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8 was detached from node:capz-fvszkt-md-0-jvr4s
I0904 04:59:55.528827       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8") on node "capz-fvszkt-md-0-jvr4s" 
I0904 04:59:57.701676       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:00:00.973438       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 21 items received
I0904 05:00:01.324657       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="112.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44590" resp=200
I0904 05:00:04.741639       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:00:08.684652       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:00:08.857859       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:00:08.857941       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" with version 1662
I0904 05:00:08.858072       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: phase: Failed, bound to: "azuredisk-2790/pvc-rtf9g (uid: 9d23cae1-6da4-46dd-8204-a5861c44f2c8)", boundByController: true
I0904 05:00:08.858197       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is bound to claim azuredisk-2790/pvc-rtf9g
I0904 05:00:08.858310       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: claim azuredisk-2790/pvc-rtf9g not found
I0904 05:00:08.858398       1 pv_controller.go:1108] reclaimVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: policy is Delete
I0904 05:00:08.858506       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]]
I0904 05:00:08.858648       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8] started
I0904 05:00:08.865268       1 pv_controller.go:1341] isVolumeReleased[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is released
... skipping 9 lines ...
I0904 05:00:13.708962       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 05:00:13.708970       1 controller.go:731] It took 2.1501e-05 seconds to finish nodeSyncInternal
I0904 05:00:14.257064       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8
I0904 05:00:14.257127       1 pv_controller.go:1436] volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" deleted
I0904 05:00:14.257171       1 pv_controller.go:1284] deleteVolumeOperation [pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: success
I0904 05:00:14.274951       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8" with version 1714
I0904 05:00:14.275261       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: phase: Failed, bound to: "azuredisk-2790/pvc-rtf9g (uid: 9d23cae1-6da4-46dd-8204-a5861c44f2c8)", boundByController: true
I0904 05:00:14.275418       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: volume is bound to claim azuredisk-2790/pvc-rtf9g
I0904 05:00:14.275446       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: claim azuredisk-2790/pvc-rtf9g not found
I0904 05:00:14.275456       1 pv_controller.go:1108] reclaimVolume[pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8]: policy is Delete
I0904 05:00:14.275473       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]]
I0904 05:00:14.275484       1 pv_controller.go:1764] operation "delete-pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8[77771fb5-216f-417a-83ca-0f605f66d396]" is already running, skipping
I0904 05:00:14.275506       1 pv_protection_controller.go:205] Got event on PV pvc-9d23cae1-6da4-46dd-8204-a5861c44f2c8
... skipping 247 lines ...
I0904 05:00:32.612654       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:00:32.612699       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:00:32.612709       1 pv_controller.go:1764] operation "delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]" is already running, skipping
I0904 05:00:32.612353       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578] started
I0904 05:00:32.617203       1 pv_controller.go:1341] isVolumeReleased[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is released
I0904 05:00:32.617396       1 pv_controller.go:1405] doDeleteVolume [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]
I0904 05:00:32.653510       1 pv_controller.go:1260] deletion of volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 05:00:32.653551       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: set phase Failed
I0904 05:00:32.653562       1 pv_controller.go:858] updating PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: set phase Failed
I0904 05:00:32.658371       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1797
I0904 05:00:32.658706       1 pv_controller.go:879] volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" entered phase "Failed"
I0904 05:00:32.658970       1 pv_controller.go:901] volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 05:00:32.659108       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1797
I0904 05:00:32.659451       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase: Failed, bound to: "azuredisk-5356/pvc-gkbpt (uid: 0533db44-5dda-4ad2-b6e3-a669b6845578)", boundByController: true
I0904 05:00:32.659819       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is bound to claim azuredisk-5356/pvc-gkbpt
I0904 05:00:32.660033       1 event.go:291] "Event occurred" object="pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:00:32.660285       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: claim azuredisk-5356/pvc-gkbpt not found
I0904 05:00:32.659124       1 pv_protection_controller.go:205] Got event on PV pvc-0533db44-5dda-4ad2-b6e3-a669b6845578
E0904 05:00:32.659780       1 goroutinemap.go:150] Operation for "delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]" failed. No retries permitted until 2022-09-04 05:00:33.159205714 +0000 UTC m=+534.952661121 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:00:32.661111       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:00:32.661654       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:00:32.661938       1 pv_controller.go:1766] operation "delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]" postponed due to exponential backoff
I0904 05:00:33.680506       1 gc_controller.go:161] GC'ing orphaned
I0904 05:00:33.680568       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:00:35.097758       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0904 05:00:38.685687       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:00:38.858623       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:00:38.858737       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1797
I0904 05:00:38.858802       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase: Failed, bound to: "azuredisk-5356/pvc-gkbpt (uid: 0533db44-5dda-4ad2-b6e3-a669b6845578)", boundByController: true
I0904 05:00:38.858836       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is bound to claim azuredisk-5356/pvc-gkbpt
I0904 05:00:38.858860       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: claim azuredisk-5356/pvc-gkbpt not found
I0904 05:00:38.858873       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:00:38.858893       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:00:38.858949       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578] started
I0904 05:00:38.864805       1 pv_controller.go:1341] isVolumeReleased[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is released
I0904 05:00:38.864828       1 pv_controller.go:1405] doDeleteVolume [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]
I0904 05:00:38.916398       1 pv_controller.go:1260] deletion of volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 05:00:38.916425       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: set phase Failed
I0904 05:00:38.916436       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase Failed already set
E0904 05:00:38.916502       1 goroutinemap.go:150] Operation for "delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]" failed. No retries permitted until 2022-09-04 05:00:39.91644557 +0000 UTC m=+541.709900977 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:00:40.151727       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 05:00:40.151779       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 to the node "capz-fvszkt-md-0-tjdcv" mounted false
I0904 05:00:40.173079       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-fvszkt-md-0-tjdcv" succeeded. VolumesAttached: []
I0904 05:00:40.173299       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578") on node "capz-fvszkt-md-0-tjdcv" 
I0904 05:00:40.173750       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 05:00:40.173869       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 to the node "capz-fvszkt-md-0-tjdcv" mounted false
... skipping 9 lines ...
I0904 05:00:53.605422       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:00:53.680704       1 gc_controller.go:161] GC'ing orphaned
I0904 05:00:53.680736       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:00:53.685967       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:00:53.859566       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:00:53.859646       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1797
I0904 05:00:53.859705       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase: Failed, bound to: "azuredisk-5356/pvc-gkbpt (uid: 0533db44-5dda-4ad2-b6e3-a669b6845578)", boundByController: true
I0904 05:00:53.859759       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is bound to claim azuredisk-5356/pvc-gkbpt
I0904 05:00:53.859777       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: claim azuredisk-5356/pvc-gkbpt not found
I0904 05:00:53.859785       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:00:53.859801       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:00:53.859848       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578] started
I0904 05:00:53.873198       1 pv_controller.go:1341] isVolumeReleased[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is released
I0904 05:00:53.873221       1 pv_controller.go:1405] doDeleteVolume [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]
I0904 05:00:53.873262       1 pv_controller.go:1260] deletion of volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) since it's in attaching or detaching state
I0904 05:00:53.873279       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: set phase Failed
I0904 05:00:53.873290       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase Failed already set
E0904 05:00:53.873326       1 goroutinemap.go:150] Operation for "delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]" failed. No retries permitted until 2022-09-04 05:00:55.873298538 +0000 UTC m=+557.666753845 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) since it's in attaching or detaching state"
I0904 05:00:54.487422       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:00:55.562027       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-tjdcv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) returned with <nil>
I0904 05:00:55.562091       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578) succeeded
I0904 05:00:55.562103       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578 was detached from node:capz-fvszkt-md-0-tjdcv
I0904 05:00:55.562401       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578") on node "capz-fvszkt-md-0-tjdcv" 
I0904 05:00:56.605229       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 18 items received
I0904 05:01:01.324287       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="68.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44170" resp=200
I0904 05:01:08.687083       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:01:08.860235       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:01:08.860402       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1797
I0904 05:01:08.860471       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase: Failed, bound to: "azuredisk-5356/pvc-gkbpt (uid: 0533db44-5dda-4ad2-b6e3-a669b6845578)", boundByController: true
I0904 05:01:08.860636       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is bound to claim azuredisk-5356/pvc-gkbpt
I0904 05:01:08.860699       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: claim azuredisk-5356/pvc-gkbpt not found
I0904 05:01:08.860718       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:01:08.860737       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:01:08.860798       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578] started
I0904 05:01:08.865918       1 pv_controller.go:1341] isVolumeReleased[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is released
... skipping 3 lines ...
I0904 05:01:13.681829       1 gc_controller.go:161] GC'ing orphaned
I0904 05:01:13.681876       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:01:14.083759       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578
I0904 05:01:14.083804       1 pv_controller.go:1436] volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" deleted
I0904 05:01:14.083826       1 pv_controller.go:1284] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: success
I0904 05:01:14.102738       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0533db44-5dda-4ad2-b6e3-a669b6845578" with version 1857
I0904 05:01:14.103086       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: phase: Failed, bound to: "azuredisk-5356/pvc-gkbpt (uid: 0533db44-5dda-4ad2-b6e3-a669b6845578)", boundByController: true
I0904 05:01:14.103168       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: volume is bound to claim azuredisk-5356/pvc-gkbpt
I0904 05:01:14.103196       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: claim azuredisk-5356/pvc-gkbpt not found
I0904 05:01:14.103208       1 pv_controller.go:1108] reclaimVolume[pvc-0533db44-5dda-4ad2-b6e3-a669b6845578]: policy is Delete
I0904 05:01:14.103254       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0533db44-5dda-4ad2-b6e3-a669b6845578[223b687a-0857-47cf-a17e-85f3c563498f]]
I0904 05:01:14.103291       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0533db44-5dda-4ad2-b6e3-a669b6845578] started
I0904 05:01:14.103569       1 pv_protection_controller.go:205] Got event on PV pvc-0533db44-5dda-4ad2-b6e3-a669b6845578
... skipping 139 lines ...
I0904 05:01:24.025887       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-562bv, PodDisruptionBudget controller will avoid syncing.
I0904 05:01:24.026060       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-562bv"
I0904 05:01:24.126799       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0904 05:01:24.173397       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid 11954b2b-12d0-4f9f-be58-0fa019a5dfe1, event type delete
I0904 05:01:24.178759       1 publisher.go:181] Finished syncing namespace "azuredisk-5356" (5.650584ms)
I0904 05:01:24.246045       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-d7ck2, uid 8fd1f6f6-6a2f-47c1-a5a4-438377c8d53c, event type delete
E0904 05:01:24.263436       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-d8hjg" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0904 05:01:24.296625       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-59x7b.17118f97df0d0619, uid 341bf34a-4fed-4cf9-af02-0ce715fe9910, event type delete
I0904 05:01:24.299757       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-59x7b.17118f9929ba1850, uid ed700936-e794-46fa-bfd4-0879460d6409, event type delete
I0904 05:01:24.303982       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-59x7b.17118f9a6ada4e80, uid 57bf5a16-f665-4af0-bfe3-b72d82b98f9d, event type delete
I0904 05:01:24.309065       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-59x7b.17118f9a6d71ea7e, uid ddfa51c6-a956-4273-93db-8dd5674b1874, event type delete
I0904 05:01:24.314746       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-59x7b.17118f9a74b4ba72, uid 5d5a7506-a7fd-421c-9c58-199e2b370424, event type delete
I0904 05:01:24.319581       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-gkbpt.17118f971aa0400f, uid 99a608a7-ac9c-4fa8-bddc-f99723fd08da, event type delete
... skipping 798 lines ...
I0904 05:03:10.078584       1 pv_controller.go:1108] reclaimVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: policy is Delete
I0904 05:03:10.078614       1 pv_controller.go:1753] scheduleOperation[delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]]
I0904 05:03:10.078623       1 pv_controller.go:1764] operation "delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]" is already running, skipping
I0904 05:03:10.078513       1 pv_controller.go:1232] deleteVolumeOperation [pvc-294c705d-eae2-4f0f-855a-f9126bb56de3] started
I0904 05:03:10.172449       1 pv_controller.go:1341] isVolumeReleased[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: volume is released
I0904 05:03:10.172477       1 pv_controller.go:1405] doDeleteVolume [pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]
I0904 05:03:10.197315       1 pv_controller.go:1260] deletion of volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 05:03:10.197874       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: set phase Failed
I0904 05:03:10.197895       1 pv_controller.go:858] updating PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: set phase Failed
I0904 05:03:10.202742       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" with version 2127
I0904 05:03:10.202787       1 pv_controller.go:879] volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" entered phase "Failed"
I0904 05:03:10.202847       1 pv_controller.go:901] volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
E0904 05:03:10.202961       1 goroutinemap.go:150] Operation for "delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]" failed. No retries permitted until 2022-09-04 05:03:10.702872104 +0000 UTC m=+692.496327411 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:03:10.203383       1 event.go:291] "Event occurred" object="pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:03:10.203717       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" with version 2127
I0904 05:03:10.203804       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: phase: Failed, bound to: "azuredisk-5194/pvc-sdm8p (uid: 294c705d-eae2-4f0f-855a-f9126bb56de3)", boundByController: true
I0904 05:03:10.203840       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: volume is bound to claim azuredisk-5194/pvc-sdm8p
I0904 05:03:10.203937       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: claim azuredisk-5194/pvc-sdm8p not found
I0904 05:03:10.203948       1 pv_controller.go:1108] reclaimVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: policy is Delete
I0904 05:03:10.204003       1 pv_controller.go:1753] scheduleOperation[delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]]
I0904 05:03:10.204015       1 pv_controller.go:1766] operation "delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]" postponed due to exponential backoff
I0904 05:03:10.204086       1 pv_protection_controller.go:205] Got event on PV pvc-294c705d-eae2-4f0f-855a-f9126bb56de3
... skipping 25 lines ...
I0904 05:03:23.866711       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: phase: Bound, bound to: "pvc-de334b68-9168-40d9-b091-5931e0467ee8", bindCompleted: true, boundByController: true
I0904 05:03:23.866754       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" found: phase: Bound, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
I0904 05:03:23.866783       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: claim is already correctly bound
I0904 05:03:23.866645       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" with version 2127
I0904 05:03:23.866794       1 pv_controller.go:1012] binding volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" to claim "azuredisk-5194/pvc-ghfr8"
I0904 05:03:23.866804       1 pv_controller.go:910] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: binding to "azuredisk-5194/pvc-ghfr8"
I0904 05:03:23.866821       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: phase: Failed, bound to: "azuredisk-5194/pvc-sdm8p (uid: 294c705d-eae2-4f0f-855a-f9126bb56de3)", boundByController: true
I0904 05:03:23.866830       1 pv_controller.go:922] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: already bound to "azuredisk-5194/pvc-ghfr8"
I0904 05:03:23.866839       1 pv_controller.go:858] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: set phase Bound
I0904 05:03:23.866845       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: volume is bound to claim azuredisk-5194/pvc-sdm8p
I0904 05:03:23.866849       1 pv_controller.go:861] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase Bound already set
I0904 05:03:23.866874       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: binding to "pvc-de334b68-9168-40d9-b091-5931e0467ee8"
I0904 05:03:23.866882       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: claim azuredisk-5194/pvc-sdm8p not found
... skipping 42 lines ...
I0904 05:03:26.604078       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 18 items received
I0904 05:03:26.604098       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0904 05:03:29.048488       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3
I0904 05:03:29.048550       1 pv_controller.go:1436] volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" deleted
I0904 05:03:29.048565       1 pv_controller.go:1284] deleteVolumeOperation [pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: success
I0904 05:03:29.059600       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-294c705d-eae2-4f0f-855a-f9126bb56de3" with version 2156
I0904 05:03:29.059669       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: phase: Failed, bound to: "azuredisk-5194/pvc-sdm8p (uid: 294c705d-eae2-4f0f-855a-f9126bb56de3)", boundByController: true
I0904 05:03:29.059719       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: volume is bound to claim azuredisk-5194/pvc-sdm8p
I0904 05:03:29.059742       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: claim azuredisk-5194/pvc-sdm8p not found
I0904 05:03:29.059751       1 pv_controller.go:1108] reclaimVolume[pvc-294c705d-eae2-4f0f-855a-f9126bb56de3]: policy is Delete
I0904 05:03:29.059768       1 pv_controller.go:1753] scheduleOperation[delete-pvc-294c705d-eae2-4f0f-855a-f9126bb56de3[e2d8bc33-f7c6-45c6-ab75-35f82cd0dac9]]
I0904 05:03:29.059813       1 pv_controller.go:1232] deleteVolumeOperation [pvc-294c705d-eae2-4f0f-855a-f9126bb56de3] started
I0904 05:03:29.059864       1 pv_protection_controller.go:205] Got event on PV pvc-294c705d-eae2-4f0f-855a-f9126bb56de3
... skipping 249 lines ...
I0904 05:04:09.773401       1 pv_controller.go:1108] reclaimVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: policy is Delete
I0904 05:04:09.773244       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61] started
I0904 05:04:09.773617       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]]
I0904 05:04:09.773790       1 pv_controller.go:1764] operation "delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]" is already running, skipping
I0904 05:04:09.775808       1 pv_controller.go:1341] isVolumeReleased[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is released
I0904 05:04:09.775828       1 pv_controller.go:1405] doDeleteVolume [pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]
I0904 05:04:09.813452       1 pv_controller.go:1260] deletion of volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
I0904 05:04:09.813487       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: set phase Failed
I0904 05:04:09.813552       1 pv_controller.go:858] updating PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: set phase Failed
I0904 05:04:09.820350       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" with version 2227
I0904 05:04:09.820429       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: phase: Failed, bound to: "azuredisk-5194/pvc-9mkgb (uid: 2db200b7-bfb9-4f2a-a3af-50d429b29d61)", boundByController: true
I0904 05:04:09.820456       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is bound to claim azuredisk-5194/pvc-9mkgb
I0904 05:04:09.820477       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: claim azuredisk-5194/pvc-9mkgb not found
I0904 05:04:09.820509       1 pv_controller.go:1108] reclaimVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: policy is Delete
I0904 05:04:09.820525       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]]
I0904 05:04:09.820533       1 pv_controller.go:1764] operation "delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]" is already running, skipping
I0904 05:04:09.820551       1 pv_protection_controller.go:205] Got event on PV pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
I0904 05:04:09.821185       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" with version 2227
I0904 05:04:09.821241       1 pv_controller.go:879] volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" entered phase "Failed"
I0904 05:04:09.821255       1 pv_controller.go:901] volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted
E0904 05:04:09.821342       1 goroutinemap.go:150] Operation for "delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]" failed. No retries permitted until 2022-09-04 05:04:10.321279607 +0000 UTC m=+752.114735014 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:04:09.822108       1 event.go:291] "Event occurred" object="pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-tjdcv), could not be deleted"
I0904 05:04:10.481904       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 05:04:10.482391       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 to the node "capz-fvszkt-md-0-tjdcv" mounted false
I0904 05:04:10.555435       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-fvszkt-md-0-tjdcv" succeeded. VolumesAttached: []
I0904 05:04:10.555556       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61") on node "capz-fvszkt-md-0-tjdcv" 
I0904 05:04:10.563034       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
... skipping 32 lines ...
I0904 05:04:23.871259       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is bound to claim azuredisk-5194/pvc-ghfr8
I0904 05:04:23.871277       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: claim azuredisk-5194/pvc-ghfr8 found: phase: Bound, bound to: "pvc-de334b68-9168-40d9-b091-5931e0467ee8", bindCompleted: true, boundByController: true
I0904 05:04:23.871293       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: all is bound
I0904 05:04:23.871301       1 pv_controller.go:858] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: set phase Bound
I0904 05:04:23.871313       1 pv_controller.go:861] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase Bound already set
I0904 05:04:23.871330       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" with version 2227
I0904 05:04:23.871358       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: phase: Failed, bound to: "azuredisk-5194/pvc-9mkgb (uid: 2db200b7-bfb9-4f2a-a3af-50d429b29d61)", boundByController: true
I0904 05:04:23.871384       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is bound to claim azuredisk-5194/pvc-9mkgb
I0904 05:04:23.871408       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: claim azuredisk-5194/pvc-9mkgb not found
I0904 05:04:23.871417       1 pv_controller.go:1108] reclaimVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: policy is Delete
I0904 05:04:23.871435       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]]
I0904 05:04:23.871491       1 pv_controller.go:1232] deleteVolumeOperation [pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61] started
I0904 05:04:23.885670       1 pv_controller.go:1341] isVolumeReleased[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is released
I0904 05:04:23.885699       1 pv_controller.go:1405] doDeleteVolume [pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]
I0904 05:04:23.885738       1 pv_controller.go:1260] deletion of volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) since it's in attaching or detaching state
I0904 05:04:23.885757       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: set phase Failed
I0904 05:04:23.885770       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: phase Failed already set
E0904 05:04:23.885858       1 goroutinemap.go:150] Operation for "delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]" failed. No retries permitted until 2022-09-04 05:04:24.885780362 +0000 UTC m=+766.679235769 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) since it's in attaching or detaching state"
I0904 05:04:24.744868       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:04:25.198963       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0904 05:04:26.044784       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-tjdcv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) returned with <nil>
I0904 05:04:26.044835       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61) succeeded
I0904 05:04:26.044848       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 was detached from node:capz-fvszkt-md-0-tjdcv
I0904 05:04:26.044876       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61") on node "capz-fvszkt-md-0-tjdcv" 
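[annotation] The lines above show the normal resolution path for the failed delete: while the disk is mid-detach the cloud provider reports "attaching or detaching state" and the operation is postponed, then azure_controller_standard completes the detach and DetachVolume.Detach succeeds, after which the next deleteVolumeOperation retry removes the disk and the PV (the "deleted a managed disk" line for this volume appears further down). A small sketch for watching that hand-off, under the same kubeconfig/Azure CLI assumptions as above; diskState is the Azure disk property expected to flip from Attached to Unattached:

  # Watch the PV until the controller finally deletes it
  kubectl get pv pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61 -w

  # Poll the disk state on the Azure side
  DISK_ID=/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
  az disk show --ids "$DISK_ID" --query diskState -o tsv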
... skipping 2 lines ...
I0904 05:04:33.688095       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:04:38.608263       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0904 05:04:38.696505       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:04:38.870966       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:04:38.871329       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-ghfr8" with version 1890
I0904 05:04:38.871329       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" with version 2227
I0904 05:04:38.871616       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: phase: Failed, bound to: "azuredisk-5194/pvc-9mkgb (uid: 2db200b7-bfb9-4f2a-a3af-50d429b29d61)", boundByController: true
I0904 05:04:38.871924       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is bound to claim azuredisk-5194/pvc-9mkgb
I0904 05:04:38.872339       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: claim azuredisk-5194/pvc-9mkgb not found
I0904 05:04:38.872511       1 pv_controller.go:1108] reclaimVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: policy is Delete
I0904 05:04:38.872543       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]]
I0904 05:04:38.871789       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: phase: Bound, bound to: "pvc-de334b68-9168-40d9-b091-5931e0467ee8", bindCompleted: true, boundByController: true
I0904 05:04:38.876115       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-ghfr8]: volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" found: phase: Bound, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
... skipping 24 lines ...
I0904 05:04:44.073915       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
I0904 05:04:44.073956       1 pv_controller.go:1436] volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" deleted
I0904 05:04:44.073969       1 pv_controller.go:1284] deleteVolumeOperation [pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: success
I0904 05:04:44.091134       1 pv_protection_controller.go:205] Got event on PV pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
I0904 05:04:44.091447       1 pv_protection_controller.go:125] Processing PV pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
I0904 05:04:44.092067       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61" with version 2278
I0904 05:04:44.095060       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: phase: Failed, bound to: "azuredisk-5194/pvc-9mkgb (uid: 2db200b7-bfb9-4f2a-a3af-50d429b29d61)", boundByController: true
I0904 05:04:44.095307       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: volume is bound to claim azuredisk-5194/pvc-9mkgb
I0904 05:04:44.095520       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: claim azuredisk-5194/pvc-9mkgb not found
I0904 05:04:44.095544       1 pv_controller.go:1108] reclaimVolume[pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61]: policy is Delete
I0904 05:04:44.095681       1 pv_controller.go:1753] scheduleOperation[delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]]
I0904 05:04:44.095856       1 pv_controller.go:1764] operation "delete-pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61[1430eaa3-2ad6-44da-b2d9-442b77c0649a]" is already running, skipping
I0904 05:04:44.098554       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-2db200b7-bfb9-4f2a-a3af-50d429b29d61
... skipping 152 lines ...
I0904 05:05:19.940716       1 pv_controller.go:1108] reclaimVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: policy is Delete
I0904 05:05:19.940730       1 pv_controller.go:1753] scheduleOperation[delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]]
I0904 05:05:19.940743       1 pv_controller.go:1764] operation "delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]" is already running, skipping
I0904 05:05:19.940770       1 pv_controller.go:1232] deleteVolumeOperation [pvc-de334b68-9168-40d9-b091-5931e0467ee8] started
I0904 05:05:20.024631       1 pv_controller.go:1341] isVolumeReleased[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is released
I0904 05:05:20.024667       1 pv_controller.go:1405] doDeleteVolume [pvc-de334b68-9168-40d9-b091-5931e0467ee8]
I0904 05:05:20.049636       1 pv_controller.go:1260] deletion of volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:05:20.049918       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: set phase Failed
I0904 05:05:20.050121       1 pv_controller.go:858] updating PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: set phase Failed
I0904 05:05:20.054905       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" with version 2344
I0904 05:05:20.055721       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase: Failed, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
I0904 05:05:20.056532       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is bound to claim azuredisk-5194/pvc-ghfr8
I0904 05:05:20.056876       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: claim azuredisk-5194/pvc-ghfr8 not found
I0904 05:05:20.057076       1 pv_controller.go:1108] reclaimVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: policy is Delete
I0904 05:05:20.057270       1 pv_controller.go:1753] scheduleOperation[delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]]
I0904 05:05:20.057483       1 pv_controller.go:1764] operation "delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]" is already running, skipping
I0904 05:05:20.057701       1 pv_protection_controller.go:205] Got event on PV pvc-de334b68-9168-40d9-b091-5931e0467ee8
I0904 05:05:20.058159       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" with version 2344
I0904 05:05:20.058394       1 pv_controller.go:879] volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" entered phase "Failed"
I0904 05:05:20.058588       1 pv_controller.go:901] volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
E0904 05:05:20.058848       1 goroutinemap.go:150] Operation for "delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]" failed. No retries permitted until 2022-09-04 05:05:20.558812576 +0000 UTC m=+822.352267983 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:05:20.059631       1 event.go:291] "Event occurred" object="pvc-de334b68-9168-40d9-b091-5931e0467ee8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:05:20.378183       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 05:05:20.378226       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 05:05:20.483100       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 05:05:20.483142       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 05:05:20.483511       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-fvszkt-md-0-jvr4s" succeeded. VolumesAttached: []
... skipping 5 lines ...
I0904 05:05:21.323780       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59114" resp=200
I0904 05:05:23.610040       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:05:23.698761       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:05:23.817449       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-jvr4s ReadyCondition updated. Updating timestamp.
I0904 05:05:23.873920       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:05:23.874051       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" with version 2344
I0904 05:05:23.874326       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase: Failed, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
I0904 05:05:23.874421       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is bound to claim azuredisk-5194/pvc-ghfr8
I0904 05:05:23.874486       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: claim azuredisk-5194/pvc-ghfr8 not found
I0904 05:05:23.874502       1 pv_controller.go:1108] reclaimVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: policy is Delete
I0904 05:05:23.874522       1 pv_controller.go:1753] scheduleOperation[delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]]
I0904 05:05:23.874617       1 pv_controller.go:1232] deleteVolumeOperation [pvc-de334b68-9168-40d9-b091-5931e0467ee8] started
I0904 05:05:23.885021       1 pv_controller.go:1341] isVolumeReleased[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is released
I0904 05:05:23.885043       1 pv_controller.go:1405] doDeleteVolume [pvc-de334b68-9168-40d9-b091-5931e0467ee8]
I0904 05:05:23.885109       1 pv_controller.go:1260] deletion of volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) since it's in attaching or detaching state
I0904 05:05:23.885124       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: set phase Failed
I0904 05:05:23.885136       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase Failed already set
E0904 05:05:23.885206       1 goroutinemap.go:150] Operation for "delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]" failed. No retries permitted until 2022-09-04 05:05:24.885145787 +0000 UTC m=+826.678601094 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) since it's in attaching or detaching state"
I0904 05:05:24.809883       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:05:30.606158       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0904 05:05:31.325316       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="89.801µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:56408" resp=200
I0904 05:05:33.608393       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received
I0904 05:05:33.689692       1 gc_controller.go:161] GC'ing orphaned
I0904 05:05:33.689736       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:05:36.079532       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) returned with <nil>
I0904 05:05:36.079617       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8) succeeded
I0904 05:05:36.079630       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8 was detached from node:capz-fvszkt-md-0-jvr4s
I0904 05:05:36.079709       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8") on node "capz-fvszkt-md-0-jvr4s" 
I0904 05:05:38.699222       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:05:38.874875       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:05:38.874963       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" with version 2344
I0904 05:05:38.875008       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase: Failed, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
I0904 05:05:38.875052       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is bound to claim azuredisk-5194/pvc-ghfr8
I0904 05:05:38.875076       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: claim azuredisk-5194/pvc-ghfr8 not found
I0904 05:05:38.875086       1 pv_controller.go:1108] reclaimVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: policy is Delete
I0904 05:05:38.875105       1 pv_controller.go:1753] scheduleOperation[delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]]
I0904 05:05:38.875147       1 pv_controller.go:1232] deleteVolumeOperation [pvc-de334b68-9168-40d9-b091-5931e0467ee8] started
I0904 05:05:38.888554       1 pv_controller.go:1341] isVolumeReleased[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is released
I0904 05:05:38.888795       1 pv_controller.go:1405] doDeleteVolume [pvc-de334b68-9168-40d9-b091-5931e0467ee8]
I0904 05:05:41.324558       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="92.001µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36152" resp=200
I0904 05:05:44.108705       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-de334b68-9168-40d9-b091-5931e0467ee8
I0904 05:05:44.108743       1 pv_controller.go:1436] volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" deleted
I0904 05:05:44.108778       1 pv_controller.go:1284] deleteVolumeOperation [pvc-de334b68-9168-40d9-b091-5931e0467ee8]: success
I0904 05:05:44.126375       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de334b68-9168-40d9-b091-5931e0467ee8" with version 2381
I0904 05:05:44.126464       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: phase: Failed, bound to: "azuredisk-5194/pvc-ghfr8 (uid: de334b68-9168-40d9-b091-5931e0467ee8)", boundByController: true
I0904 05:05:44.126496       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: volume is bound to claim azuredisk-5194/pvc-ghfr8
I0904 05:05:44.126516       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: claim azuredisk-5194/pvc-ghfr8 not found
I0904 05:05:44.126527       1 pv_controller.go:1108] reclaimVolume[pvc-de334b68-9168-40d9-b091-5931e0467ee8]: policy is Delete
I0904 05:05:44.126578       1 pv_controller.go:1753] scheduleOperation[delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]]
I0904 05:05:44.126586       1 pv_controller.go:1764] operation "delete-pvc-de334b68-9168-40d9-b091-5931e0467ee8[ac52cc2a-d814-4997-a748-b7894bc2f77f]" is already running, skipping
I0904 05:05:44.126611       1 pv_protection_controller.go:205] Got event on PV pvc-de334b68-9168-40d9-b091-5931e0467ee8
... skipping 39 lines ...
I0904 05:05:49.091989       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-1353/pvc-sqcdb" duration="6.5µs"
I0904 05:05:49.104626       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679" (70.814735ms)
I0904 05:05:49.104890       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679", timestamp:time.Time{wall:0xc0bd2a6b420d8a54, ext:850827897319, loc:(*time.Location)(0x731ea80)}}
I0904 05:05:49.105110       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 05:05:49.105706       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679"
I0904 05:05:49.109677       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc" duration="84.23473ms"
I0904 05:05:49.109721       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-ghlbc\": the object has been modified; please apply your changes to the latest version and try again"
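[annotation] The "object has been modified" error above is an ordinary optimistic-concurrency conflict: the deployment controller wrote against a stale resourceVersion because the Deployment/ReplicaSet had just been updated, and it simply requeues and resyncs (the very next "Started syncing deployment" line). If one wanted to confirm this by hand, a sketch like the following shows the object advancing to a newer resourceVersion between retries; names are taken from the log:

  # resourceVersion advances on every write; a conflict just means "retry against the latest"
  kubectl get deployment azuredisk-volume-tester-ghlbc -n azuredisk-1353 \
    -o jsonpath='{.metadata.resourceVersion}{"\n"}'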
I0904 05:05:49.109768       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc" startTime="2022-09-04 05:05:49.109743012 +0000 UTC m=+850.903198319"
I0904 05:05:49.110421       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-ghlbc" timed out (false) [last progress check: 2022-09-04 05:05:49 +0000 UTC - now: 2022-09-04 05:05:49.110415622 +0000 UTC m=+850.903870929]
I0904 05:05:49.116034       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc"
I0904 05:05:49.116825       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-sqcdb" with version 2407
I0904 05:05:49.117594       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-sqcdb]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0904 05:05:49.117815       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-sqcdb]: no volume found
... skipping 261 lines ...
I0904 05:06:15.105980       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d, PodDisruptionBudget controller will avoid syncing.
I0904 05:06:15.106124       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d"
I0904 05:06:15.105694       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d"
I0904 05:06:15.105444       1 replica_set.go:439] Pod azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d updated, objectMeta {Name:azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d GenerateName:azuredisk-volume-tester-ghlbc-545bf6f679- Namespace:azuredisk-1353 SelfLink: UID:d612aef2-a658-45ae-a9bd-a73a9dc1d2a3 ResourceVersion:2498 Generation:0 CreationTimestamp:2022-09-04 05:06:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:545bf6f679] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-ghlbc-545bf6f679 UID:c07823cf-8373-4c61-b4a8-593c52eacbbe Controller:0xc00216aa47 BlockOwnerDeletion:0xc00216aa48}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 05:06:15 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07823cf-8373-4c61-b4a8-593c52eacbbe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d GenerateName:azuredisk-volume-tester-ghlbc-545bf6f679- Namespace:azuredisk-1353 SelfLink: UID:d612aef2-a658-45ae-a9bd-a73a9dc1d2a3 ResourceVersion:2500 Generation:0 CreationTimestamp:2022-09-04 05:06:15 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:545bf6f679] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-ghlbc-545bf6f679 UID:c07823cf-8373-4c61-b4a8-593c52eacbbe Controller:0xc001fdeede BlockOwnerDeletion:0xc001fdeedf}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 05:06:15 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c07823cf-8373-4c61-b4a8-593c52eacbbe\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]}.
I0904 05:06:15.107294       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679"
I0904 05:06:15.107566       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc" startTime="2022-09-04 05:06:15.107540923 +0000 UTC m=+876.900996330"
W0904 05:06:15.114030       1 reconciler.go:385] Multi-Attach error for volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be") from node "capz-fvszkt-md-0-jvr4s" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679-pv4hd on node capz-fvszkt-md-0-tjdcv
I0904 05:06:15.114583       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be\" Volume is already used by pod(s) azuredisk-volume-tester-ghlbc-545bf6f679-pv4hd"
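[annotation] The Multi-Attach warning is expected for this test: azure-disk volumes are ReadWriteOnce, and the replacement pod (…-dkp9d) was scheduled on capz-fvszkt-md-0-jvr4s while the old pod (…-pv4hd) on capz-fvszkt-md-0-tjdcv still held the attachment, so the attach waits until the old pod terminates and the disk is detached. A quick, read-only way to see both sides of the conflict, assuming the workload cluster kubeconfig:

  # Pod placement across the two worker nodes
  kubectl get pods -n azuredisk-1353 -o wide

  # Which node currently holds the attachment for the contested PV
  kubectl get volumeattachment | grep pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be

  # The attach failures surfaced as pod events
  kubectl get events -n azuredisk-1353 --field-selector reason=FailedAttachVolume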
I0904 05:06:15.120024       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc"
I0904 05:06:15.122801       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679" (87.088567ms)
I0904 05:06:15.123022       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679", timestamp:time.Time{wall:0xc0bd2a71c22200df, ext:876829238286, loc:(*time.Location)(0x731ea80)}}
I0904 05:06:15.123245       1 controller_utils.go:972] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679-pv4hd in state Running, deletion time 2022-09-04 05:06:45 +0000 UTC
I0904 05:06:15.123470       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-ghlbc-545bf6f679, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 05:06:15.124209       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-ghlbc" duration="16.649243ms"
... skipping 600 lines ...
I0904 05:09:09.767640       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: claim azuredisk-1353/pvc-sqcdb not found
I0904 05:09:09.767669       1 pv_controller.go:1108] reclaimVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: policy is Delete
I0904 05:09:09.767750       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]]
I0904 05:09:09.767761       1 pv_controller.go:1764] operation "delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]" is already running, skipping
I0904 05:09:09.769848       1 pv_controller.go:1341] isVolumeReleased[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is released
I0904 05:09:09.770011       1 pv_controller.go:1405] doDeleteVolume [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]
I0904 05:09:09.792228       1 pv_controller.go:1260] deletion of volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:09:09.792257       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: set phase Failed
I0904 05:09:09.792267       1 pv_controller.go:858] updating PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: set phase Failed
I0904 05:09:09.796820       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" with version 2791
I0904 05:09:09.796872       1 pv_controller.go:879] volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" entered phase "Failed"
I0904 05:09:09.796948       1 pv_controller.go:901] volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
E0904 05:09:09.797725       1 goroutinemap.go:150] Operation for "delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]" failed. No retries permitted until 2022-09-04 05:09:10.297668955 +0000 UTC m=+1052.091124362 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:09:09.798130       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" with version 2791
I0904 05:09:09.798211       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: phase: Failed, bound to: "azuredisk-1353/pvc-sqcdb (uid: eb2a6500-5e02-4568-b8a7-9bdda5ea73be)", boundByController: true
I0904 05:09:09.798328       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is bound to claim azuredisk-1353/pvc-sqcdb
I0904 05:09:09.798479       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: claim azuredisk-1353/pvc-sqcdb not found
I0904 05:09:09.798579       1 pv_controller.go:1108] reclaimVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: policy is Delete
I0904 05:09:09.798942       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]]
I0904 05:09:09.799138       1 pv_controller.go:1766] operation "delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]" postponed due to exponential backoff
I0904 05:09:09.798942       1 pv_protection_controller.go:205] Got event on PV pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be
... skipping 19 lines ...
I0904 05:09:18.722309       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:09:21.323699       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="221.803µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50848" resp=200
I0904 05:09:23.615506       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:09:23.712413       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:09:23.887095       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:09:23.887226       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" with version 2791
I0904 05:09:23.887269       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: phase: Failed, bound to: "azuredisk-1353/pvc-sqcdb (uid: eb2a6500-5e02-4568-b8a7-9bdda5ea73be)", boundByController: true
I0904 05:09:23.887321       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is bound to claim azuredisk-1353/pvc-sqcdb
I0904 05:09:23.887357       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: claim azuredisk-1353/pvc-sqcdb not found
I0904 05:09:23.887379       1 pv_controller.go:1108] reclaimVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: policy is Delete
I0904 05:09:23.887399       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]]
I0904 05:09:23.887451       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be] started
I0904 05:09:23.898345       1 pv_controller.go:1341] isVolumeReleased[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is released
I0904 05:09:23.898369       1 pv_controller.go:1405] doDeleteVolume [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]
I0904 05:09:23.898412       1 pv_controller.go:1260] deletion of volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) since it's in attaching or detaching state
I0904 05:09:23.898429       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: set phase Failed
I0904 05:09:23.898443       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: phase Failed already set
E0904 05:09:23.898504       1 goroutinemap.go:150] Operation for "delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]" failed. No retries permitted until 2022-09-04 05:09:24.898453894 +0000 UTC m=+1066.691909301 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) since it's in attaching or detaching state"
I0904 05:09:25.109495       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:09:26.290013       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) returned with <nil>
I0904 05:09:26.290081       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be) succeeded
I0904 05:09:26.290092       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be was detached from node:capz-fvszkt-md-0-jvr4s
I0904 05:09:26.290118       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be") on node "capz-fvszkt-md-0-jvr4s" 
I0904 05:09:31.323984       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="85.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44840" resp=200
... skipping 2 lines ...
I0904 05:09:35.434073       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:09:35.589793       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 18 items received
I0904 05:09:38.712730       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:09:38.713992       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:09:38.887695       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:09:38.887973       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" with version 2791
I0904 05:09:38.888046       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: phase: Failed, bound to: "azuredisk-1353/pvc-sqcdb (uid: eb2a6500-5e02-4568-b8a7-9bdda5ea73be)", boundByController: true
I0904 05:09:38.888107       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is bound to claim azuredisk-1353/pvc-sqcdb
I0904 05:09:38.888129       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: claim azuredisk-1353/pvc-sqcdb not found
I0904 05:09:38.888157       1 pv_controller.go:1108] reclaimVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: policy is Delete
I0904 05:09:38.888175       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]]
I0904 05:09:38.888217       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be] started
I0904 05:09:38.902888       1 pv_controller.go:1341] isVolumeReleased[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is released
I0904 05:09:38.902911       1 pv_controller.go:1405] doDeleteVolume [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]
I0904 05:09:41.323650       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:55634" resp=200
I0904 05:09:44.170331       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be
I0904 05:09:44.170370       1 pv_controller.go:1436] volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" deleted
I0904 05:09:44.170386       1 pv_controller.go:1284] deleteVolumeOperation [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: success
I0904 05:09:44.181972       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be" with version 2845
I0904 05:09:44.182030       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: phase: Failed, bound to: "azuredisk-1353/pvc-sqcdb (uid: eb2a6500-5e02-4568-b8a7-9bdda5ea73be)", boundByController: true
I0904 05:09:44.182326       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: volume is bound to claim azuredisk-1353/pvc-sqcdb
I0904 05:09:44.182357       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: claim azuredisk-1353/pvc-sqcdb not found
I0904 05:09:44.182479       1 pv_controller.go:1108] reclaimVolume[pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be]: policy is Delete
I0904 05:09:44.182505       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be[b629ac0f-1560-473a-b621-644f79c05735]]
I0904 05:09:44.182625       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be] started
I0904 05:09:44.182949       1 pv_protection_controller.go:205] Got event on PV pvc-eb2a6500-5e02-4568-b8a7-9bdda5ea73be
... skipping 109 lines ...
I0904 05:09:53.889377       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-4538/pvc-ttj87]: already bound to "pvc-022d0fe4-76bd-49be-af53-d70851348963"
I0904 05:09:53.889397       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-4538/pvc-ttj87] status: set phase Bound
I0904 05:09:53.889421       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4538/pvc-ttj87] status: phase Bound already set
I0904 05:09:53.889435       1 pv_controller.go:1038] volume "pvc-022d0fe4-76bd-49be-af53-d70851348963" bound to claim "azuredisk-4538/pvc-ttj87"
I0904 05:09:53.889453       1 pv_controller.go:1039] volume "pvc-022d0fe4-76bd-49be-af53-d70851348963" status after binding: phase: Bound, bound to: "azuredisk-4538/pvc-ttj87 (uid: 022d0fe4-76bd-49be-af53-d70851348963)", boundByController: true
I0904 05:09:53.889470       1 pv_controller.go:1040] claim "azuredisk-4538/pvc-ttj87" status after binding: phase: Bound, bound to: "pvc-022d0fe4-76bd-49be-af53-d70851348963", bindCompleted: true, boundByController: true
E0904 05:09:53.984971       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-xvlrw" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
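[annotation] The tokens_controller error above is benign teardown noise: namespace azuredisk-1353 is being deleted, and a terminating namespace rejects creation of new objects, so the service-account token controller's attempt to recreate the default token secret is refused and dropped. A one-liner to confirm the namespace is in the Terminating phase, assuming the same kubeconfig:

  kubectl get namespace azuredisk-1353 -o jsonpath='{.status.phase}{"\n"}'
  # Expected output while cleanup is in flight: Terminating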
I0904 05:09:53.990056       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.17118fea92021104, uid 3feb875c-7186-46ba-b831-2f5af097bcdb, event type delete
I0904 05:09:54.034780       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.17118fea922d5365, uid e50dbeb9-9dac-4428-b966-ca8968cd9f8c, event type delete
I0904 05:09:54.181502       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.17118ffa0e647341, uid 9c9c2fd9-9be4-4c04-ba37-507b084fcbe0, event type delete
I0904 05:09:54.237926       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.1711900736aaadb7, uid cb1c0f57-e0a7-421f-b694-e9ad7fcc2fec, event type delete
I0904 05:09:54.379942       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.17119009c31268b5, uid 85bbc920-3861-4729-9935-d2ccdee94d60, event type delete
I0904 05:09:54.432071       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-ghlbc-545bf6f679-dkp9d.17119009c65f043e, uid 91daaf56-3a20-495d-a68d-4975e5a5c4c9, event type delete
... skipping 197 lines ...
I0904 05:10:12.236428       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-59/pvc-g256d[f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]]
I0904 05:10:12.236437       1 pv_controller.go:1764] operation "provision-azuredisk-59/pvc-g256d[f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]" is already running, skipping
I0904 05:10:12.236466       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-59/pvc-g256d"
I0904 05:10:12.238502       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 StorageAccountType:StandardSSD_LRS Size:10
I0904 05:10:12.311401       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8266
I0904 05:10:12.364060       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-547hp, uid f774616c-dd90-463e-a502-3625ce088cdc, event type delete
E0904 05:10:12.380476       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-6nnj4" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0904 05:10:12.446809       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0904 05:10:12.446903       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.901µs)
I0904 05:10:12.446935       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid a72d0484-8fea-49d3-ba9b-e83611eb7a4f, event type delete
I0904 05:10:12.516780       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8266, name kube-root-ca.crt, uid bd509296-3136-46b7-945d-23029d7c104d, event type delete
I0904 05:10:12.520294       1 publisher.go:181] Finished syncing namespace "azuredisk-8266" (4.019289ms)
I0904 05:10:12.525491       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.6µs)
... skipping 5 lines ...
I0904 05:10:13.714275       1 controller.go:291] nodeSync has been triggered
I0904 05:10:13.714305       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 05:10:13.714316       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 05:10:13.714323       1 controller.go:731] It took 2.0101e-05 seconds to finish nodeSyncInternal
I0904 05:10:13.797664       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4376
I0904 05:10:13.836182       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4376, name default-token-rt9r9, uid 87a6aa39-4592-45b7-82e9-66d0be4b3c50, event type delete
E0904 05:10:13.865323       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4376/default: secrets "default-token-2zjmq" is forbidden: unable to create new content in namespace azuredisk-4376 because it is being terminated
I0904 05:10:13.866067       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4376/default), service account deleted, removing tokens
I0904 05:10:13.866334       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (2.8µs)
I0904 05:10:13.866415       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4376, name default, uid 637eb2e3-d6a6-447d-bcd2-745f42e00c78, event type delete
I0904 05:10:13.956857       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4376, name kube-root-ca.crt, uid 5c19902d-e03f-4fd4-998d-55056ad644f6, event type delete
I0904 05:10:13.960137       1 publisher.go:181] Finished syncing namespace "azuredisk-4376" (2.453736ms)
I0904 05:10:14.019665       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3.4µs)
... skipping 183 lines ...
I0904 05:10:15.274115       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid 6ed12a94-1f23-4809-a262-b12ab5ed1d77, event type delete
I0904 05:10:15.282897       1 publisher.go:181] Finished syncing namespace "azuredisk-7996" (8.762028ms)
I0904 05:10:15.337336       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e") from node "capz-fvszkt-md-0-jvr4s" 
I0904 05:10:15.337383       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3") from node "capz-fvszkt-md-0-jvr4s" 
I0904 05:10:15.337410       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8") from node "capz-fvszkt-md-0-jvr4s" 
I0904 05:10:15.374546       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-655ht, uid b89868d3-e64e-4fd3-96aa-7ddea653191d, event type delete
E0904 05:10:15.395745       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-xjtcl" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0904 05:10:15.432190       1 azure_backoff.go:109] VirtualMachinesClient.List(capz-fvszkt) success
I0904 05:10:15.434951       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0904 05:10:15.435174       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (3.2µs)
I0904 05:10:15.435230       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid fed586f4-99b2-484a-914d-7ec45f04e50c, event type delete
I0904 05:10:15.492290       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" to node "capz-fvszkt-md-0-jvr4s".
I0904 05:10:15.492363       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" to node "capz-fvszkt-md-0-jvr4s".
... skipping 291 lines ...
I0904 05:10:51.026069       1 pv_controller.go:1108] reclaimVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: policy is Delete
I0904 05:10:51.026248       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]]
I0904 05:10:51.026267       1 pv_controller.go:1764] operation "delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]" is already running, skipping
I0904 05:10:51.025943       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8] started
I0904 05:10:51.028804       1 pv_controller.go:1341] isVolumeReleased[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is released
I0904 05:10:51.028837       1 pv_controller.go:1405] doDeleteVolume [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]
I0904 05:10:51.051608       1 pv_controller.go:1260] deletion of volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:10:51.052294       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: set phase Failed
I0904 05:10:51.052739       1 pv_controller.go:858] updating PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: set phase Failed
I0904 05:10:51.058179       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3089
I0904 05:10:51.058831       1 pv_controller.go:879] volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" entered phase "Failed"
I0904 05:10:51.059122       1 pv_controller.go:901] volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
E0904 05:10:51.059466       1 goroutinemap.go:150] Operation for "delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]" failed. No retries permitted until 2022-09-04 05:10:51.559427988 +0000 UTC m=+1153.352883395 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:10:51.058764       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3089
I0904 05:10:51.059724       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase: Failed, bound to: "azuredisk-59/pvc-g256d (uid: f0fe9061-8d71-4d25-bc3d-145abb9f3ba8)", boundByController: true
I0904 05:10:51.059754       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is bound to claim azuredisk-59/pvc-g256d
I0904 05:10:51.059799       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: claim azuredisk-59/pvc-g256d not found
I0904 05:10:51.059810       1 pv_controller.go:1108] reclaimVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: policy is Delete
I0904 05:10:51.059825       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]]
I0904 05:10:51.059834       1 pv_controller.go:1766] operation "delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]" postponed due to exponential backoff
I0904 05:10:51.058782       1 pv_protection_controller.go:205] Got event on PV pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8
... skipping 42 lines ...
I0904 05:10:53.891717       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: volume is bound to claim azuredisk-59/pvc-rwv25
I0904 05:10:53.891728       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: claim azuredisk-59/pvc-rwv25 found: phase: Bound, bound to: "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3", bindCompleted: true, boundByController: true
I0904 05:10:53.891743       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: all is bound
I0904 05:10:53.891751       1 pv_controller.go:858] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: set phase Bound
I0904 05:10:53.891759       1 pv_controller.go:861] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: phase Bound already set
I0904 05:10:53.891774       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3089
I0904 05:10:53.891786       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase: Failed, bound to: "azuredisk-59/pvc-g256d (uid: f0fe9061-8d71-4d25-bc3d-145abb9f3ba8)", boundByController: true
I0904 05:10:53.891802       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is bound to claim azuredisk-59/pvc-g256d
I0904 05:10:53.891815       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: claim azuredisk-59/pvc-g256d not found
I0904 05:10:53.891822       1 pv_controller.go:1108] reclaimVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: policy is Delete
I0904 05:10:53.891848       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]]
I0904 05:10:53.891890       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3004
I0904 05:10:53.891911       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase: Bound, bound to: "azuredisk-59/pvc-fbhmk (uid: 999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e)", boundByController: true
... skipping 2 lines ...
I0904 05:10:53.891982       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: all is bound
I0904 05:10:53.891986       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8] started
I0904 05:10:53.891988       1 pv_controller.go:858] updating PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: set phase Bound
I0904 05:10:53.891998       1 pv_controller.go:861] updating PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase Bound already set
I0904 05:10:53.901749       1 pv_controller.go:1341] isVolumeReleased[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is released
I0904 05:10:53.901773       1 pv_controller.go:1405] doDeleteVolume [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]
I0904 05:10:53.924527       1 pv_controller.go:1260] deletion of volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:10:53.924565       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: set phase Failed
I0904 05:10:53.924576       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase Failed already set
E0904 05:10:53.924652       1 goroutinemap.go:150] Operation for "delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]" failed. No retries permitted until 2022-09-04 05:10:54.924617615 +0000 UTC m=+1156.718073022 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:10:55.217626       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:11:00.053399       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 6 items received
I0904 05:11:00.711924       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 05:11:00.711966       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 05:11:00.711980       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 05:11:00.711988       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e to the node "capz-fvszkt-md-0-jvr4s" mounted false
... skipping 61 lines ...
I0904 05:11:08.892157       1 pv_controller.go:858] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: set phase Bound
I0904 05:11:08.892165       1 pv_controller.go:861] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: phase Bound already set
I0904 05:11:08.892173       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-59/pvc-rwv25]: binding to "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3"
I0904 05:11:08.892194       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3089
I0904 05:11:08.892268       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-59/pvc-rwv25]: already bound to "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3"
I0904 05:11:08.892279       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-rwv25] status: set phase Bound
I0904 05:11:08.892290       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase: Failed, bound to: "azuredisk-59/pvc-g256d (uid: f0fe9061-8d71-4d25-bc3d-145abb9f3ba8)", boundByController: true
I0904 05:11:08.892300       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-rwv25] status: phase Bound already set
I0904 05:11:08.892376       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is bound to claim azuredisk-59/pvc-g256d
I0904 05:11:08.892380       1 pv_controller.go:1038] volume "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3" bound to claim "azuredisk-59/pvc-rwv25"
I0904 05:11:08.892419       1 pv_controller.go:1039] volume "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-rwv25 (uid: f71b1c32-aa94-4e8a-afa5-48e797807bf3)", boundByController: true
I0904 05:11:08.892451       1 pv_controller.go:1040] claim "azuredisk-59/pvc-rwv25" status after binding: phase: Bound, bound to: "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3", bindCompleted: true, boundByController: true
I0904 05:11:08.892468       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: claim azuredisk-59/pvc-g256d not found
... skipping 6 lines ...
I0904 05:11:08.893190       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: claim azuredisk-59/pvc-fbhmk found: phase: Bound, bound to: "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e", bindCompleted: true, boundByController: true
I0904 05:11:08.893205       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: all is bound
I0904 05:11:08.893213       1 pv_controller.go:858] updating PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: set phase Bound
I0904 05:11:08.893222       1 pv_controller.go:861] updating PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase Bound already set
I0904 05:11:08.903611       1 pv_controller.go:1341] isVolumeReleased[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is released
I0904 05:11:08.903635       1 pv_controller.go:1405] doDeleteVolume [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]
I0904 05:11:08.903801       1 pv_controller.go:1260] deletion of volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) since it's in attaching or detaching state
I0904 05:11:08.903825       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: set phase Failed
I0904 05:11:08.903836       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase Failed already set
E0904 05:11:08.903877       1 goroutinemap.go:150] Operation for "delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]" failed. No retries permitted until 2022-09-04 05:11:10.903845492 +0000 UTC m=+1172.697300899 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) since it's in attaching or detaching state"
I0904 05:11:11.325405       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="94.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58648" resp=200
I0904 05:11:12.600549       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 29 items received
I0904 05:11:13.703202       1 gc_controller.go:161] GC'ing orphaned
I0904 05:11:13.703242       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:11:16.275847       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) returned with <nil>
I0904 05:11:16.275908       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8) succeeded
... skipping 43 lines ...
I0904 05:11:23.892937       1 pv_controller.go:1040] claim "azuredisk-59/pvc-rwv25" status after binding: phase: Bound, bound to: "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3", bindCompleted: true, boundByController: true
I0904 05:11:23.892307       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: claim azuredisk-59/pvc-rwv25 found: phase: Bound, bound to: "pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3", bindCompleted: true, boundByController: true
I0904 05:11:23.893025       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: all is bound
I0904 05:11:23.893032       1 pv_controller.go:858] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: set phase Bound
I0904 05:11:23.893042       1 pv_controller.go:861] updating PersistentVolume[pvc-f71b1c32-aa94-4e8a-afa5-48e797807bf3]: phase Bound already set
I0904 05:11:23.893058       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3089
I0904 05:11:23.893084       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase: Failed, bound to: "azuredisk-59/pvc-g256d (uid: f0fe9061-8d71-4d25-bc3d-145abb9f3ba8)", boundByController: true
I0904 05:11:23.893109       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is bound to claim azuredisk-59/pvc-g256d
I0904 05:11:23.893134       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: claim azuredisk-59/pvc-g256d not found
I0904 05:11:23.893146       1 pv_controller.go:1108] reclaimVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: policy is Delete
I0904 05:11:23.893165       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]]
I0904 05:11:23.893192       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3004
I0904 05:11:23.893212       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase: Bound, bound to: "azuredisk-59/pvc-fbhmk (uid: 999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e)", boundByController: true
... skipping 9 lines ...
I0904 05:11:25.248671       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:11:26.599687       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 0 items received
I0904 05:11:29.073951       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8
I0904 05:11:29.073998       1 pv_controller.go:1436] volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" deleted
I0904 05:11:29.074013       1 pv_controller.go:1284] deleteVolumeOperation [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: success
I0904 05:11:29.083128       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8" with version 3148
I0904 05:11:29.083347       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: phase: Failed, bound to: "azuredisk-59/pvc-g256d (uid: f0fe9061-8d71-4d25-bc3d-145abb9f3ba8)", boundByController: true
I0904 05:11:29.083446       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: volume is bound to claim azuredisk-59/pvc-g256d
I0904 05:11:29.083528       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: claim azuredisk-59/pvc-g256d not found
I0904 05:11:29.083546       1 pv_controller.go:1108] reclaimVolume[pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8]: policy is Delete
I0904 05:11:29.083634       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8[08864958-691e-4a04-a43f-8f5d7d93b76a]]
I0904 05:11:29.083721       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8] started
I0904 05:11:29.084147       1 pv_protection_controller.go:205] Got event on PV pvc-f0fe9061-8d71-4d25-bc3d-145abb9f3ba8
... skipping 148 lines ...
I0904 05:11:43.764287       1 pv_controller.go:1108] reclaimVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: policy is Delete
I0904 05:11:43.764401       1 pv_controller.go:1232] deleteVolumeOperation [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e] started
I0904 05:11:43.764519       1 pv_controller.go:1753] scheduleOperation[delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]]
I0904 05:11:43.764562       1 pv_controller.go:1764] operation "delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]" is already running, skipping
I0904 05:11:43.767085       1 pv_controller.go:1341] isVolumeReleased[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: volume is released
I0904 05:11:43.767108       1 pv_controller.go:1405] doDeleteVolume [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]
I0904 05:11:43.767145       1 pv_controller.go:1260] deletion of volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) since it's in attaching or detaching state
I0904 05:11:43.767169       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: set phase Failed
I0904 05:11:43.767179       1 pv_controller.go:858] updating PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: set phase Failed
I0904 05:11:43.771896       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3182
I0904 05:11:43.772116       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase: Failed, bound to: "azuredisk-59/pvc-fbhmk (uid: 999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e)", boundByController: true
I0904 05:11:43.772233       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: volume is bound to claim azuredisk-59/pvc-fbhmk
I0904 05:11:43.772401       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: claim azuredisk-59/pvc-fbhmk not found
I0904 05:11:43.772851       1 pv_controller.go:1108] reclaimVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: policy is Delete
I0904 05:11:43.772973       1 pv_controller.go:1753] scheduleOperation[delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]]
I0904 05:11:43.773097       1 pv_controller.go:1764] operation "delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]" is already running, skipping
I0904 05:11:43.772025       1 pv_protection_controller.go:205] Got event on PV pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e
I0904 05:11:43.772792       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3182
I0904 05:11:43.773589       1 pv_controller.go:879] volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" entered phase "Failed"
I0904 05:11:43.773620       1 pv_controller.go:901] volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) since it's in attaching or detaching state
E0904 05:11:43.773712       1 goroutinemap.go:150] Operation for "delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]" failed. No retries permitted until 2022-09-04 05:11:44.27367505 +0000 UTC m=+1206.067130457 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) since it's in attaching or detaching state"
I0904 05:11:43.774233       1 event.go:291] "Event occurred" object="pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) since it's in attaching or detaching state"
I0904 05:11:47.120864       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) returned with <nil>
I0904 05:11:47.120922       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e) succeeded
I0904 05:11:47.120934       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e was detached from node:capz-fvszkt-md-0-jvr4s
I0904 05:11:47.120964       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e") on node "capz-fvszkt-md-0-jvr4s" 
I0904 05:11:50.800044       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-tjdcv"
I0904 05:11:51.323229       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="144.003µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43486" resp=200
... skipping 9 lines ...
I0904 05:11:53.715360       1 controller.go:731] It took 3.1401e-05 seconds to finish nodeSyncInternal
I0904 05:11:53.719563       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:11:53.869324       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0904 05:11:53.882809       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-tjdcv ReadyCondition updated. Updating timestamp.
I0904 05:11:53.894324       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:11:53.894511       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3182
I0904 05:11:53.894561       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase: Failed, bound to: "azuredisk-59/pvc-fbhmk (uid: 999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e)", boundByController: true
I0904 05:11:53.894607       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: volume is bound to claim azuredisk-59/pvc-fbhmk
I0904 05:11:53.894620       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: claim azuredisk-59/pvc-fbhmk not found
I0904 05:11:53.894625       1 pv_controller.go:1108] reclaimVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: policy is Delete
I0904 05:11:53.894638       1 pv_controller.go:1753] scheduleOperation[delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]]
I0904 05:11:53.894659       1 pv_controller.go:1232] deleteVolumeOperation [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e] started
I0904 05:11:53.988692       1 pv_controller.go:1341] isVolumeReleased[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: volume is released
I0904 05:11:53.988729       1 pv_controller.go:1405] doDeleteVolume [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]
I0904 05:11:55.283264       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:11:59.150866       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e
I0904 05:11:59.151195       1 pv_controller.go:1436] volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" deleted
I0904 05:11:59.151241       1 pv_controller.go:1284] deleteVolumeOperation [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: success
I0904 05:11:59.162812       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e" with version 3206
I0904 05:11:59.162860       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: phase: Failed, bound to: "azuredisk-59/pvc-fbhmk (uid: 999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e)", boundByController: true
I0904 05:11:59.162892       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: volume is bound to claim azuredisk-59/pvc-fbhmk
I0904 05:11:59.162921       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: claim azuredisk-59/pvc-fbhmk not found
I0904 05:11:59.162934       1 pv_controller.go:1108] reclaimVolume[pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e]: policy is Delete
I0904 05:11:59.162951       1 pv_controller.go:1753] scheduleOperation[delete-pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e[5fd53fac-eeea-447c-9946-ee69ba71271f]]
I0904 05:11:59.162978       1 pv_controller.go:1232] deleteVolumeOperation [pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e] started
I0904 05:11:59.163163       1 pv_protection_controller.go:205] Got event on PV pvc-999efe9d-7388-4e7f-b6d5-69ef2c8a1f6e
... skipping 205 lines ...
I0904 05:12:09.685594       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" to node "capz-fvszkt-md-0-jvr4s".
I0904 05:12:09.688464       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" to node "capz-fvszkt-md-0-jvr4s".
I0904 05:12:09.710754       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-59
I0904 05:12:09.743337       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" lun 0 to node "capz-fvszkt-md-0-jvr4s".
I0904 05:12:09.743422       1 azure_controller_standard.go:93] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - attach disk(capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) with DiskEncryptionSetID()
I0904 05:12:09.745338       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-59, name default-token-mkpm4, uid 0588e0fa-1124-435f-b9da-86f4bee94af7, event type delete
E0904 05:12:09.763506       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-59/default: secrets "default-token-q2vlq" is forbidden: unable to create new content in namespace azuredisk-59 because it is being terminated
I0904 05:12:09.787931       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-59, name kube-root-ca.crt, uid fbbe1a2c-365d-485a-93a8-d5a389f91fc7, event type delete
I0904 05:12:09.791810       1 publisher.go:181] Finished syncing namespace "azuredisk-59" (4.177361ms)
I0904 05:12:09.857840       1 tokens_controller.go:252] syncServiceAccount(azuredisk-59/default), service account deleted, removing tokens
I0904 05:12:09.857925       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-59" (2.8µs)
I0904 05:12:09.857965       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-59, name default, uid ba821f3b-1256-4726-9031-d79c6759b5ac, event type delete
I0904 05:12:09.883590       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-59, name azuredisk-volume-tester-grrwp.171190227b00fa18, uid 39df4633-d5c4-483a-a6bb-3bcc22b0b834, event type delete
... skipping 232 lines ...
I0904 05:12:45.190514       1 pv_controller.go:1108] reclaimVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: policy is Delete
I0904 05:12:45.190556       1 pv_controller.go:1753] scheduleOperation[delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]]
I0904 05:12:45.190617       1 pv_controller.go:1764] operation "delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]" is already running, skipping
I0904 05:12:45.190693       1 pv_controller.go:1232] deleteVolumeOperation [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58] started
I0904 05:12:45.234029       1 pv_controller.go:1341] isVolumeReleased[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is released
I0904 05:12:45.234153       1 pv_controller.go:1405] doDeleteVolume [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]
I0904 05:12:45.262490       1 pv_controller.go:1260] deletion of volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:12:45.262707       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: set phase Failed
I0904 05:12:45.262922       1 pv_controller.go:858] updating PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: set phase Failed
I0904 05:12:45.387694       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" with version 3345
I0904 05:12:45.388088       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: phase: Failed, bound to: "azuredisk-2546/pvc-d47g5 (uid: 67056f9d-b720-4c9b-ae60-f3338e3b8f58)", boundByController: true
I0904 05:12:45.388378       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is bound to claim azuredisk-2546/pvc-d47g5
I0904 05:12:45.388678       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: claim azuredisk-2546/pvc-d47g5 not found
I0904 05:12:45.388898       1 pv_controller.go:1108] reclaimVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: policy is Delete
I0904 05:12:45.388930       1 pv_controller.go:1753] scheduleOperation[delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]]
I0904 05:12:45.388940       1 pv_controller.go:1764] operation "delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]" is already running, skipping
I0904 05:12:45.388965       1 pv_protection_controller.go:205] Got event on PV pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58
I0904 05:12:45.389119       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" with version 3345
I0904 05:12:45.389155       1 pv_controller.go:879] volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" entered phase "Failed"
I0904 05:12:45.389174       1 pv_controller.go:901] volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
E0904 05:12:45.389364       1 goroutinemap.go:150] Operation for "delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]" failed. No retries permitted until 2022-09-04 05:12:45.889198299 +0000 UTC m=+1267.682653706 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:12:45.389739       1 event.go:291] "Event occurred" object="pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:12:45.608485       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 8 items received
I0904 05:12:49.596229       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 10 items received
I0904 05:12:50.825578       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
I0904 05:12:50.825835       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58 to the node "capz-fvszkt-md-0-jvr4s" mounted false
I0904 05:12:50.825970       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 to the node "capz-fvszkt-md-0-jvr4s" mounted false
... skipping 37 lines ...
I0904 05:12:53.898600       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-2546/pvc-gvd2w]: binding to "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975"
I0904 05:12:53.898554       1 pv_controller.go:858] updating PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: set phase Bound
I0904 05:12:53.898642       1 pv_controller.go:861] updating PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: phase Bound already set
I0904 05:12:53.898643       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-2546/pvc-gvd2w]: already bound to "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975"
I0904 05:12:53.898653       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-gvd2w] status: set phase Bound
I0904 05:12:53.898658       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" with version 3345
I0904 05:12:53.898678       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: phase: Failed, bound to: "azuredisk-2546/pvc-d47g5 (uid: 67056f9d-b720-4c9b-ae60-f3338e3b8f58)", boundByController: true
I0904 05:12:53.898716       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is bound to claim azuredisk-2546/pvc-d47g5
I0904 05:12:53.898720       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-gvd2w] status: phase Bound already set
I0904 05:12:53.898752       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: claim azuredisk-2546/pvc-d47g5 not found
I0904 05:12:53.898751       1 pv_controller.go:1038] volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" bound to claim "azuredisk-2546/pvc-gvd2w"
I0904 05:12:53.898760       1 pv_controller.go:1108] reclaimVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: policy is Delete
I0904 05:12:53.898768       1 pv_controller.go:1039] volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-gvd2w (uid: 9db8f23f-d149-48a7-9c9b-b40ce0d2d975)", boundByController: true
I0904 05:12:53.898774       1 pv_controller.go:1753] scheduleOperation[delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]]
I0904 05:12:53.898781       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-gvd2w" status after binding: phase: Bound, bound to: "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975", bindCompleted: true, boundByController: true
I0904 05:12:53.898801       1 pv_controller.go:1232] deleteVolumeOperation [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58] started
I0904 05:12:53.905481       1 pv_controller.go:1341] isVolumeReleased[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is released
I0904 05:12:53.905511       1 pv_controller.go:1405] doDeleteVolume [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]
I0904 05:12:53.905548       1 pv_controller.go:1260] deletion of volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) since it's in attaching or detaching state
I0904 05:12:53.905566       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: set phase Failed
I0904 05:12:53.905576       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: phase Failed already set
E0904 05:12:53.905616       1 goroutinemap.go:150] Operation for "delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]" failed. No retries permitted until 2022-09-04 05:12:54.905585852 +0000 UTC m=+1276.699041259 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58) since it's in attaching or detaching state"
I0904 05:12:55.342792       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:12:57.095861       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 05:12:58.806943       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 9 items received
I0904 05:12:59.746529       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0904 05:13:01.322366       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="73.4µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:42714" resp=200
I0904 05:13:01.899595       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 8 items received
... skipping 30 lines ...
I0904 05:13:08.899836       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is bound to claim azuredisk-2546/pvc-gvd2w
I0904 05:13:08.899856       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: claim azuredisk-2546/pvc-gvd2w found: phase: Bound, bound to: "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975", bindCompleted: true, boundByController: true
I0904 05:13:08.899876       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: all is bound
I0904 05:13:08.899888       1 pv_controller.go:858] updating PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: set phase Bound
I0904 05:13:08.899899       1 pv_controller.go:861] updating PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: phase Bound already set
I0904 05:13:08.899916       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" with version 3345
I0904 05:13:08.899946       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: phase: Failed, bound to: "azuredisk-2546/pvc-d47g5 (uid: 67056f9d-b720-4c9b-ae60-f3338e3b8f58)", boundByController: true
I0904 05:13:08.899971       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is bound to claim azuredisk-2546/pvc-d47g5
I0904 05:13:08.899995       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: claim azuredisk-2546/pvc-d47g5 not found
I0904 05:13:08.900006       1 pv_controller.go:1108] reclaimVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: policy is Delete
I0904 05:13:08.900025       1 pv_controller.go:1753] scheduleOperation[delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]]
I0904 05:13:08.900091       1 pv_controller.go:1232] deleteVolumeOperation [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58] started
I0904 05:13:08.901528       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-control-plane-wnbrq ReadyCondition updated. Updating timestamp.
... skipping 3 lines ...
I0904 05:13:13.707652       1 gc_controller.go:161] GC'ing orphaned
I0904 05:13:13.708009       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:13:14.234498       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58
I0904 05:13:14.234537       1 pv_controller.go:1436] volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" deleted
I0904 05:13:14.234553       1 pv_controller.go:1284] deleteVolumeOperation [pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: success
I0904 05:13:14.243979       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58" with version 3390
I0904 05:13:14.244130       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: phase: Failed, bound to: "azuredisk-2546/pvc-d47g5 (uid: 67056f9d-b720-4c9b-ae60-f3338e3b8f58)", boundByController: true
I0904 05:13:14.244241       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: volume is bound to claim azuredisk-2546/pvc-d47g5
I0904 05:13:14.244368       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: claim azuredisk-2546/pvc-d47g5 not found
I0904 05:13:14.244469       1 pv_controller.go:1108] reclaimVolume[pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58]: policy is Delete
I0904 05:13:14.244531       1 pv_controller.go:1753] scheduleOperation[delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]]
I0904 05:13:14.244608       1 pv_controller.go:1764] operation "delete-pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58[d3aacd6d-44a1-4fa0-accb-41be23c33b24]" is already running, skipping
I0904 05:13:14.244700       1 pv_protection_controller.go:205] Got event on PV pvc-67056f9d-b720-4c9b-ae60-f3338e3b8f58
... skipping 46 lines ...
I0904 05:13:16.663679       1 pv_controller.go:1108] reclaimVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: policy is Delete
I0904 05:13:16.663691       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]]
I0904 05:13:16.663700       1 pv_controller.go:1764] operation "delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]" is already running, skipping
I0904 05:13:16.663731       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975] started
I0904 05:13:16.673227       1 pv_controller.go:1341] isVolumeReleased[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is released
I0904 05:13:16.673246       1 pv_controller.go:1405] doDeleteVolume [pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]
I0904 05:13:16.673284       1 pv_controller.go:1260] deletion of volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) since it's in attaching or detaching state
I0904 05:13:16.673302       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: set phase Failed
I0904 05:13:16.673311       1 pv_controller.go:858] updating PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: set phase Failed
I0904 05:13:16.676843       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" with version 3401
I0904 05:13:16.676918       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: phase: Failed, bound to: "azuredisk-2546/pvc-gvd2w (uid: 9db8f23f-d149-48a7-9c9b-b40ce0d2d975)", boundByController: true
I0904 05:13:16.676947       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is bound to claim azuredisk-2546/pvc-gvd2w
I0904 05:13:16.676996       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: claim azuredisk-2546/pvc-gvd2w not found
I0904 05:13:16.677019       1 pv_controller.go:1108] reclaimVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: policy is Delete
I0904 05:13:16.677032       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]]
I0904 05:13:16.677040       1 pv_controller.go:1764] operation "delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]" is already running, skipping
I0904 05:13:16.677085       1 pv_protection_controller.go:205] Got event on PV pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975
I0904 05:13:16.678261       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" with version 3401
I0904 05:13:16.678284       1 pv_controller.go:879] volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" entered phase "Failed"
I0904 05:13:16.678293       1 pv_controller.go:901] volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) since it's in attaching or detaching state
E0904 05:13:16.678364       1 goroutinemap.go:150] Operation for "delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]" failed. No retries permitted until 2022-09-04 05:13:17.17833387 +0000 UTC m=+1298.971789277 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) since it's in attaching or detaching state"
I0904 05:13:16.678839       1 event.go:291] "Event occurred" object="pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) since it's in attaching or detaching state"
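Annotation (not part of the controller log): the goroutinemap retry delays recorded above double on each consecutive failure (durationBeforeRetry 500ms here, then 1s, 2s and 4s later in this log). Below is a minimal sketch of that doubling-backoff pattern, assuming a simple cap on the maximum delay; the backoff type and next method are illustrative names, not the Kubernetes implementation.

// Minimal sketch of a doubling retry backoff, assuming an initial delay of
// 500ms and a cap, mirroring the durationBeforeRetry values seen in the log
// above (500ms, 1s, 2s, 4s). All names here are illustrative only.
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max, current time.Duration
}

// next returns the delay before the next retry and doubles the stored delay
// for the following failure, capped at max.
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	}
	d := b.current
	b.current *= 2
	if b.current > b.max {
		b.current = b.max
	}
	return d
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 16 * time.Second}
	for i := 0; i < 4; i++ {
		fmt.Printf("retry %d not permitted for %v\n", i+1, b.next())
	}
}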
I0904 05:13:21.323884       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="104.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51220" resp=200
I0904 05:13:21.834138       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) returned with <nil>
I0904 05:13:21.834183       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975) succeeded
I0904 05:13:21.834195       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975 was detached from node:capz-fvszkt-md-0-jvr4s
I0904 05:13:21.834220       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975") on node "capz-fvszkt-md-0-jvr4s" 
I0904 05:13:23.621407       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:13:23.723629       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 05:13:23.900046       1 pv_controller_base.go:528] resyncing PV controller
I0904 05:13:23.900126       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" with version 3401
I0904 05:13:23.900198       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: phase: Failed, bound to: "azuredisk-2546/pvc-gvd2w (uid: 9db8f23f-d149-48a7-9c9b-b40ce0d2d975)", boundByController: true
I0904 05:13:23.900241       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is bound to claim azuredisk-2546/pvc-gvd2w
I0904 05:13:23.900262       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: claim azuredisk-2546/pvc-gvd2w not found
I0904 05:13:23.900272       1 pv_controller.go:1108] reclaimVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: policy is Delete
I0904 05:13:23.900288       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]]
I0904 05:13:23.900383       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975] started
I0904 05:13:23.906430       1 pv_controller.go:1341] isVolumeReleased[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is released
I0904 05:13:23.906481       1 pv_controller.go:1405] doDeleteVolume [pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]
I0904 05:13:25.371569       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:13:29.119484       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975
I0904 05:13:29.119526       1 pv_controller.go:1436] volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" deleted
I0904 05:13:29.119542       1 pv_controller.go:1284] deleteVolumeOperation [pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: success
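Annotation (sketch, not part of the controller log): the sequence above shows why the earlier delete attempts failed: the Azure managed disk cannot be deleted while it is still attached to a VM or in an attaching/detaching transition, so the PV controller keeps retrying doDeleteVolume until the detach completes, after which the managed disk is deleted. A minimal sketch of that ordering follows, with hypothetical diskState and deleteDisk helpers standing in for the cloud-provider calls.

// Minimal sketch of "detach before delete" retry ordering, assuming the
// hypothetical diskState type and deleteDisk helper below; this illustrates
// the ordering in the log above, not the cloud-provider implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

type diskState int

const (
	detached diskState = iota
	attached
	attachingOrDetaching
)

// deleteDisk refuses to delete a disk that is attached or in transition,
// mirroring the "failed to delete disk ... attaching or detaching state"
// errors recorded above.
func deleteDisk(state diskState) error {
	if state != detached {
		return errors.New("disk is attached or in attaching/detaching state")
	}
	return nil
}

func main() {
	states := []diskState{attached, attachingOrDetaching, detached}
	for i, s := range states {
		if err := deleteDisk(s); err != nil {
			fmt.Printf("attempt %d: %v; retrying\n", i+1, err)
			time.Sleep(10 * time.Millisecond) // stand-in for the controller's backoff
			continue
		}
		fmt.Printf("attempt %d: disk deleted\n", i+1)
	}
}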
I0904 05:13:29.129147       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975" with version 3419
I0904 05:13:29.129321       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: phase: Failed, bound to: "azuredisk-2546/pvc-gvd2w (uid: 9db8f23f-d149-48a7-9c9b-b40ce0d2d975)", boundByController: true
I0904 05:13:29.129520       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: volume is bound to claim azuredisk-2546/pvc-gvd2w
I0904 05:13:29.129673       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: claim azuredisk-2546/pvc-gvd2w not found
I0904 05:13:29.129828       1 pv_controller.go:1108] reclaimVolume[pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975]: policy is Delete
I0904 05:13:29.129980       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975[e4cf6a2d-1d0c-48d2-bd55-2e5a25490eab]]
I0904 05:13:29.129470       1 pv_protection_controller.go:205] Got event on PV pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975
I0904 05:13:29.130213       1 pv_protection_controller.go:125] Processing PV pvc-9db8f23f-d149-48a7-9c9b-b40ce0d2d975
... skipping 347 lines ...
I0904 05:13:40.758886       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-a57db482-0718-4276-a9b2-b9801c522e58" lun 0 to node "capz-fvszkt-md-0-jvr4s".
I0904 05:13:40.758950       1 azure_controller_standard.go:93] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - attach disk(capz-fvszkt-dynamic-pvc-a57db482-0718-4276-a9b2-b9801c522e58, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-a57db482-0718-4276-a9b2-b9801c522e58) with DiskEncryptionSetID()
I0904 05:13:40.811556       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 00099401-6d03-4507-b800-8b36742efa32, event type delete
I0904 05:13:40.816082       1 publisher.go:181] Finished syncing namespace "azuredisk-3410" (4.518369ms)
I0904 05:13:40.885436       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-88cxr, uid aab35cfb-133f-476f-87c7-914959e10e9e, event type delete
I0904 05:13:40.991091       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-fvszkt-md-0-jvr4s"
E0904 05:13:41.087543       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-fh4bl" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0904 05:13:41.209551       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0904 05:13:41.209826       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (3.7µs)
I0904 05:13:41.210007       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 1ad46bf2-4a02-430d-8d42-d290d9902b7b, event type delete
I0904 05:13:41.226629       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (1.2µs)
I0904 05:13:41.226812       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
I0904 05:13:41.250610       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3410" (803.293895ms)
... skipping 289 lines ...
I0904 05:14:18.686190       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:14:18.686243       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:14:18.686257       1 pv_controller.go:1764] operation "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" is already running, skipping
I0904 05:14:18.686064       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd] started
I0904 05:14:18.689066       1 pv_controller.go:1341] isVolumeReleased[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is released
I0904 05:14:18.689090       1 pv_controller.go:1405] doDeleteVolume [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]
I0904 05:14:18.710843       1 pv_controller.go:1260] deletion of volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:14:18.710874       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: set phase Failed
I0904 05:14:18.710886       1 pv_controller.go:858] updating PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: set phase Failed
I0904 05:14:18.716978       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:14:18.717019       1 pv_controller.go:879] volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" entered phase "Failed"
I0904 05:14:18.717062       1 pv_controller.go:901] volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
E0904 05:14:18.717272       1 goroutinemap.go:150] Operation for "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" failed. No retries permitted until 2022-09-04 05:14:19.217092891 +0000 UTC m=+1361.010548198 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:14:18.717407       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:14:18.717551       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:14:18.717651       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:14:18.717762       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:14:18.717843       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:14:18.717865       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:14:18.717876       1 pv_controller.go:1766] operation "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" postponed due to exponential backoff
I0904 05:14:18.717925       1 pv_protection_controller.go:205] Got event on PV pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd
... skipping 77 lines ...
I0904 05:14:23.910644       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-49tqc" status after binding: phase: Bound, bound to: "pvc-a57db482-0718-4276-a9b2-b9801c522e58", bindCompleted: true, boundByController: true
I0904 05:14:23.911156       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: claim azuredisk-8582/pvc-ks7lw found: phase: Bound, bound to: "pvc-656f3449-d297-44ee-89d8-952e1626cb8c", bindCompleted: true, boundByController: true
I0904 05:14:23.914351       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: all is bound
I0904 05:14:23.914597       1 pv_controller.go:858] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: set phase Bound
I0904 05:14:23.914828       1 pv_controller.go:861] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: phase Bound already set
I0904 05:14:23.915072       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:14:23.915260       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:14:23.914302       1 node_lifecycle_controller.go:1047] Node capz-fvszkt-md-0-jvr4s ReadyCondition updated. Updating timestamp.
I0904 05:14:23.915652       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:14:23.915938       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:14:23.916095       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:14:23.916291       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:14:23.916578       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd] started
I0904 05:14:23.923139       1 pv_controller.go:1341] isVolumeReleased[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is released
I0904 05:14:23.924574       1 pv_controller.go:1405] doDeleteVolume [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]
I0904 05:14:23.971337       1 pv_controller.go:1260] deletion of volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:14:23.971369       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: set phase Failed
I0904 05:14:23.971380       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase Failed already set
E0904 05:14:23.971454       1 goroutinemap.go:150] Operation for "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" failed. No retries permitted until 2022-09-04 05:14:24.97139038 +0000 UTC m=+1366.764845787 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:14:25.440873       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:14:27.994460       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0904 05:14:31.326919       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.801µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:57398" resp=200
I0904 05:14:33.710298       1 gc_controller.go:161] GC'ing orphaned
I0904 05:14:33.710334       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 05:14:36.531662       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-a57db482-0718-4276-a9b2-b9801c522e58) returned with <nil>
... skipping 48 lines ...
I0904 05:14:38.906808       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: volume is bound to claim azuredisk-8582/pvc-ks7lw
I0904 05:14:38.906990       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: claim azuredisk-8582/pvc-ks7lw found: phase: Bound, bound to: "pvc-656f3449-d297-44ee-89d8-952e1626cb8c", bindCompleted: true, boundByController: true
I0904 05:14:38.907069       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: all is bound
I0904 05:14:38.907087       1 pv_controller.go:858] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: set phase Bound
I0904 05:14:38.907097       1 pv_controller.go:861] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: phase Bound already set
I0904 05:14:38.907129       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:14:38.907151       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:14:38.907178       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:14:38.907199       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:14:38.907207       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:14:38.907222       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:14:38.907290       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd] started
I0904 05:14:38.916736       1 pv_controller.go:1341] isVolumeReleased[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is released
I0904 05:14:38.916776       1 pv_controller.go:1405] doDeleteVolume [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]
I0904 05:14:38.949126       1 pv_controller.go:1260] deletion of volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted
I0904 05:14:38.949172       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: set phase Failed
I0904 05:14:38.949185       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase Failed already set
E0904 05:14:38.949460       1 goroutinemap.go:150] Operation for "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" failed. No retries permitted until 2022-09-04 05:14:40.949262633 +0000 UTC m=+1382.742718040 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/virtualMachines/capz-fvszkt-md-0-jvr4s), could not be deleted"
I0904 05:14:41.323148       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="64.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:55262" resp=200
I0904 05:14:51.324408       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="146.702µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58292" resp=200
I0904 05:14:51.953148       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-656f3449-d297-44ee-89d8-952e1626cb8c) returned with <nil>
I0904 05:14:51.953215       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-656f3449-d297-44ee-89d8-952e1626cb8c) succeeded
I0904 05:14:51.953232       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-656f3449-d297-44ee-89d8-952e1626cb8c was detached from node:capz-fvszkt-md-0-jvr4s
I0904 05:14:51.953261       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-656f3449-d297-44ee-89d8-952e1626cb8c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-656f3449-d297-44ee-89d8-952e1626cb8c") on node "capz-fvszkt-md-0-jvr4s" 
... skipping 48 lines ...
I0904 05:14:53.907749       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-ks7lw] status: set phase Bound
I0904 05:14:53.907788       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-ks7lw] status: phase Bound already set
I0904 05:14:53.907801       1 pv_controller.go:1038] volume "pvc-656f3449-d297-44ee-89d8-952e1626cb8c" bound to claim "azuredisk-8582/pvc-ks7lw"
I0904 05:14:53.907827       1 pv_controller.go:1039] volume "pvc-656f3449-d297-44ee-89d8-952e1626cb8c" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-ks7lw (uid: 656f3449-d297-44ee-89d8-952e1626cb8c)", boundByController: true
I0904 05:14:53.907859       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-ks7lw" status after binding: phase: Bound, bound to: "pvc-656f3449-d297-44ee-89d8-952e1626cb8c", bindCompleted: true, boundByController: true
I0904 05:14:53.906458       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:14:53.907886       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:14:53.907908       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:14:53.907928       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:14:53.907940       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:14:53.907957       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:14:53.907994       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd] started
I0904 05:14:53.929272       1 pv_controller.go:1341] isVolumeReleased[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is released
I0904 05:14:53.929290       1 pv_controller.go:1405] doDeleteVolume [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]
I0904 05:14:53.929324       1 pv_controller.go:1260] deletion of volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) since it's in attaching or detaching state
I0904 05:14:53.929337       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: set phase Failed
I0904 05:14:53.929370       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase Failed already set
E0904 05:14:53.929407       1 goroutinemap.go:150] Operation for "delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]" failed. No retries permitted until 2022-09-04 05:14:57.929378546 +0000 UTC m=+1399.722833853 (durationBeforeRetry 4s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) since it's in attaching or detaching state"
I0904 05:14:55.468171       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 05:15:01.335909       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="150.802µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43466" resp=200
I0904 05:15:02.591534       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 21 items received
I0904 05:15:07.142292       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0904 05:15:07.333177       1 azure_controller_standard.go:184] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) returned with <nil>
I0904 05:15:07.333243       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd) succeeded
... skipping 23 lines ...
I0904 05:15:08.906683       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: volume is bound to claim azuredisk-8582/pvc-ks7lw
I0904 05:15:08.906726       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: claim azuredisk-8582/pvc-ks7lw found: phase: Bound, bound to: "pvc-656f3449-d297-44ee-89d8-952e1626cb8c", bindCompleted: true, boundByController: true
I0904 05:15:08.906743       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: all is bound
I0904 05:15:08.906756       1 pv_controller.go:858] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: set phase Bound
I0904 05:15:08.906767       1 pv_controller.go:861] updating PersistentVolume[pvc-656f3449-d297-44ee-89d8-952e1626cb8c]: phase Bound already set
I0904 05:15:08.906791       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3596
I0904 05:15:08.906819       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:15:08.906858       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:15:08.906884       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:15:08.906901       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:15:08.906919       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:15:08.906795       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8582/pvc-49tqc]: already bound to "pvc-a57db482-0718-4276-a9b2-b9801c522e58"
I0904 05:15:08.906985       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-49tqc] status: set phase Bound
... skipping 32 lines ...
I0904 05:15:13.717065       1 controller.go:731] It took 0.000398106 seconds to finish nodeSyncInternal
I0904 05:15:14.172672       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd
I0904 05:15:14.172718       1 pv_controller.go:1436] volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" deleted
I0904 05:15:14.172759       1 pv_controller.go:1284] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: success
I0904 05:15:14.186659       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd" with version 3677
I0904 05:15:14.186968       1 pv_protection_controller.go:205] Got event on PV pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd
I0904 05:15:14.187122       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: phase: Failed, bound to: "azuredisk-8582/pvc-lcfbw (uid: 1f4a116b-e2c9-4912-8b16-6db6638119dd)", boundByController: true
I0904 05:15:14.187096       1 pv_protection_controller.go:125] Processing PV pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd
I0904 05:15:14.187311       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: volume is bound to claim azuredisk-8582/pvc-lcfbw
I0904 05:15:14.187498       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: claim azuredisk-8582/pvc-lcfbw not found
I0904 05:15:14.187530       1 pv_controller.go:1108] reclaimVolume[pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd]: policy is Delete
I0904 05:15:14.187889       1 pv_controller.go:1753] scheduleOperation[delete-pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd[26875006-eac2-4953-bd7f-4d8aa1b4aa98]]
I0904 05:15:14.187937       1 pv_controller.go:1232] deleteVolumeOperation [pvc-1f4a116b-e2c9-4912-8b16-6db6638119dd] started
... skipping 180 lines ...
I0904 05:15:41.589631       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (46.091574ms)
I0904 05:15:41.589858       1 publisher.go:181] Finished syncing namespace "azuredisk-4547" (45.881072ms)
I0904 05:15:42.050653       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0904 05:15:42.099114       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-4vmb5, uid 7f1004c9-16d6-46c7-b893-f1e9027176f3, event type delete
I0904 05:15:42.183080       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8582, name kube-root-ca.crt, uid 12ab579e-b3f3-43dc-9ec2-c50236e68ea2, event type delete
I0904 05:15:42.186833       1 publisher.go:181] Finished syncing namespace "azuredisk-8582" (4.140751ms)
E0904 05:15:42.188954       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-647wq" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0904 05:15:42.255093       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.171190524a3c3948, uid cdd39ff6-1b8b-4a63-9150-60e0871c0a98, event type delete
I0904 05:15:42.259529       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.1711905396ec6946, uid 4b22ad6a-be49-4d00-a46e-6b5e0f75f423, event type delete
I0904 05:15:42.265803       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.171190560318dbf8, uid 58df168f-49a5-48f3-85d3-3331117afb33, event type delete
I0904 05:15:42.269940       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.17119058738e5e17, uid df13897d-2950-4b06-90a8-bb33cb5510c3, event type delete
I0904 05:15:42.274083       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.1711905a47072551, uid 8ddf771a-2c25-4038-bf1e-23c1bf09c7c8, event type delete
I0904 05:15:42.278695       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-587c4.1711905a4ab87e28, uid abf2f14c-e783-4d95-9318-64d8523f9f20, event type delete
... skipping 12 lines ...
I0904 05:15:42.494200       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (447.681679ms)
I0904 05:15:42.986344       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (4µs)
I0904 05:15:43.132609       1 publisher.go:181] Finished syncing namespace "azuredisk-7051" (37.259364ms)
I0904 05:15:43.133547       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (38.620281ms)
I0904 05:15:43.499383       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0904 05:15:43.585917       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-g2tvh, uid 35e5db8b-7dc7-4306-adaa-5c167419632a, event type delete
E0904 05:15:43.649946       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-4jc75" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0904 05:15:43.703629       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 250e2c3b-3518-4334-87d3-35aeede1b223, event type delete
I0904 05:15:43.708739       1 publisher.go:181] Finished syncing namespace "azuredisk-7726" (5.678271ms)
I0904 05:15:43.781977       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0904 05:15:43.783168       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.6µs)
I0904 05:15:43.783194       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 675b1549-0c2d-4de0-96d0-f7bcaaccd2cf, event type delete
I0904 05:15:44.047776       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (3.4µs)
... skipping 31 lines ...
I0904 05:15:44.505191       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7051/pvc-5x9bk" with version 3793
I0904 05:15:44.507651       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-fvszkt-dynamic-pvc-1cc6363f-ba54-4ede-8517-dacc110024d8 StorageAccountType:Standard_LRS Size:10
I0904 05:15:44.981884       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3086
I0904 05:15:45.083215       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid f67d4a20-5bf1-4ce3-9dba-f10c8e416e2d, event type delete
I0904 05:15:45.086806       1 publisher.go:181] Finished syncing namespace "azuredisk-3086" (3.94325ms)
I0904 05:15:45.120029       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-pb9xx, uid abc5288a-ab18-4b6a-a27c-5aaaf2e28678, event type delete
E0904 05:15:45.141510       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-sn2gz" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0904 05:15:45.159049       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0904 05:15:45.159269       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.7µs)
I0904 05:15:45.159294       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 848ae0f7-3389-4b69-a727-33d5aed9d8b9, event type delete
I0904 05:15:45.256500       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.1µs)
I0904 05:15:45.256516       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0904 05:15:45.269984       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (292.271742ms)
I0904 05:15:46.442292       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0904 05:15:46.536674       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-9kc85, uid 5fed1611-465a-4213-8f78-b591c9ae5f42, event type delete
E0904 05:15:46.562377       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-dgktb" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0904 05:15:46.625388       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid 377d2fa5-0d84-44ce-a782-0fb5ef201b62, event type delete
I0904 05:15:46.634936       1 publisher.go:181] Finished syncing namespace "azuredisk-1387" (9.775845ms)
I0904 05:15:46.663654       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0904 05:15:46.663715       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.6µs)
I0904 05:15:46.663936       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid 4499f1c9-f1ea-4fa6-8b20-f0b081bea979, event type delete
I0904 05:15:46.671978       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (17.801µs)
... skipping 70 lines ...
I0904 05:15:47.617436       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1cc6363f-ba54-4ede-8517-dacc110024d8" lun 0 to node "capz-fvszkt-md-0-jvr4s".
I0904 05:15:47.617489       1 azure_controller_standard.go:93] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-jvr4s) - attach disk(capz-fvszkt-dynamic-pvc-1cc6363f-ba54-4ede-8517-dacc110024d8, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-1cc6363f-ba54-4ede-8517-dacc110024d8) with DiskEncryptionSetID()
I0904 05:15:47.991604       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4547
I0904 05:15:48.018252       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4547, name kube-root-ca.crt, uid 5ce925dd-14ee-4eb5-9fb5-d8535e8822dd, event type delete
I0904 05:15:48.026234       1 publisher.go:181] Finished syncing namespace "azuredisk-4547" (8.307023ms)
I0904 05:15:48.050239       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-nwq54, uid 6bba5a42-3a80-462a-91e1-83e476607be9, event type delete
E0904 05:15:48.065514       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-cmpnf" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0904 05:15:48.238976       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4547/default), service account deleted, removing tokens
I0904 05:15:48.239243       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (3µs)
I0904 05:15:48.239646       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4547, name default, uid f3063489-69bd-4869-9996-6b9b8e01f6d9, event type delete
I0904 05:15:48.251583       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (1.5µs)
I0904 05:15:48.251935       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4547, estimate: 0, errors: <nil>
I0904 05:15:48.265186       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4547" (277.821419ms)
... skipping 510 lines ...
I0904 05:17:41.470267       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-769c6f6f-80fe-429d-8645-fdd49c85566d" lun 0 to node "capz-fvszkt-md-0-tjdcv".
I0904 05:17:41.470324       1 azure_controller_standard.go:93] azureDisk - update(capz-fvszkt): vm(capz-fvszkt-md-0-tjdcv) - attach disk(capz-fvszkt-dynamic-pvc-769c6f6f-80fe-429d-8645-fdd49c85566d, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-769c6f6f-80fe-429d-8645-fdd49c85566d) with DiskEncryptionSetID()
I0904 05:17:41.808301       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0904 05:17:41.919274       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-w5s7k, uid b000533b-c3e7-47f3-8032-7117eefb91a7, event type delete
I0904 05:17:42.038295       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid a5bb80f8-ce25-455e-91f1-0d1cd0d1c69f, event type delete
I0904 05:17:42.042321       1 publisher.go:181] Finished syncing namespace "azuredisk-7051" (4.609169ms)
E0904 05:17:42.044059       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-zqkcn" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0904 05:17:42.056414       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.1711906fd772f699, uid 04578a00-55e1-413c-85a6-cc596050201c, event type delete
I0904 05:17:42.087326       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.17119072433e9dee, uid 44b74b49-f3d7-4b96-833e-015952dbc7ac, event type delete
I0904 05:17:42.198984       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.1711907433319526, uid 65d9b58f-2637-41c7-a6a4-1c2db7a70d94, event type delete
I0904 05:17:42.289088       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.1711907435fac8d1, uid bdcd0e6c-992c-4c53-b130-bb92ac829b09, event type delete
I0904 05:17:42.393439       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.171190743d07feda, uid 3ae155c5-c3c0-4373-adde-dbe617535b52, event type delete
I0904 05:17:42.437411       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-jwfq9.171190751d5e7da2, uid e74cbbf7-dc54-4c72-b1f7-05280b41135b, event type delete
... skipping 287 lines ...
I0904 05:18:39.915107       1 stateful_set_control.go:368] StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff has 1 unhealthy Pods starting with azuredisk-volume-tester-p97ff-0
I0904 05:18:39.915140       1 stateful_set_control.go:443] StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff is waiting for Pod azuredisk-volume-tester-p97ff-0 to be Running and Ready
I0904 05:18:39.915149       1 stateful_set_control.go:113] StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff pod status replicas=1 ready=0 current=1 updated=1
I0904 05:18:39.915157       1 stateful_set_control.go:121] StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff revisions current=azuredisk-volume-tester-p97ff-7dc4c9c847 update=azuredisk-volume-tester-p97ff-7dc4c9c847
I0904 05:18:39.915166       1 stateful_set.go:453] Successfully synced StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff successful
I0904 05:18:39.915172       1 stateful_set.go:410] Finished syncing statefulset "azuredisk-9183/azuredisk-volume-tester-p97ff" (3.588153ms)
W0904 05:18:39.927177       1 reconciler.go:344] Multi-Attach error for volume "pvc-769c6f6f-80fe-429d-8645-fdd49c85566d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-fvszkt/providers/Microsoft.Compute/disks/capz-fvszkt-dynamic-pvc-769c6f6f-80fe-429d-8645-fdd49c85566d") from node "capz-fvszkt-md-0-jvr4s" Volume is already exclusively attached to node capz-fvszkt-md-0-tjdcv and can't be attached to another
I0904 05:18:39.927452       1 event.go:291] "Event occurred" object="azuredisk-9183/azuredisk-volume-tester-p97ff-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-769c6f6f-80fe-429d-8645-fdd49c85566d\" Volume is already exclusively attached to one node and can't be attached to another"
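Annotation (sketch, not part of the controller log): the Multi-Attach warning above is the attach/detach controller refusing to attach the single-writer Azure disk to capz-fvszkt-md-0-jvr4s while it is still exclusively attached to capz-fvszkt-md-0-tjdcv; the attach can only proceed once the detach from the first node completes. A minimal sketch of that guard follows, using an illustrative attachedTo map rather than the controller's actual in-memory state.

// Minimal sketch of a multi-attach guard for single-writer volumes, assuming
// an illustrative attachedTo map; it mirrors the "already exclusively
// attached to node ... can't be attached to another" warning above.
package main

import "fmt"

var attachedTo = map[string]string{
	"pvc-769c6f6f-80fe-429d-8645-fdd49c85566d": "capz-fvszkt-md-0-tjdcv",
}

// canAttach reports whether a single-writer volume may be attached to node.
func canAttach(volume, node string) error {
	if owner, ok := attachedTo[volume]; ok && owner != node {
		return fmt.Errorf("volume %s is already exclusively attached to node %s and can't be attached to another", volume, owner)
	}
	return nil
}

func main() {
	if err := canAttach("pvc-769c6f6f-80fe-429d-8645-fdd49c85566d", "capz-fvszkt-md-0-jvr4s"); err != nil {
		fmt.Println("Multi-Attach error:", err)
	}
}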
I0904 05:18:40.045261       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-p97ff-0"
I0904 05:18:40.045594       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-p97ff-0, PodDisruptionBudget controller will avoid syncing.
I0904 05:18:40.047671       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-p97ff-0"
I0904 05:18:40.047380       1 stateful_set.go:222] Pod azuredisk-volume-tester-p97ff-0 updated, objectMeta {Name:azuredisk-volume-tester-p97ff-0 GenerateName:azuredisk-volume-tester-p97ff- Namespace:azuredisk-9183 SelfLink: UID:566e7b8d-00a6-44c7-91eb-cf569a62ef90 ResourceVersion:4157 Generation:0 CreationTimestamp:2022-09-04 05:18:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-907430288210826867 controller-revision-hash:azuredisk-volume-tester-p97ff-7dc4c9c847 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-p97ff-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-p97ff UID:8afb20ac-1814-4f9f-9d4d-e44d8e486c1d Controller:0xc0008ac15e BlockOwnerDeletion:0xc0008ac15f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 05:18:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8afb20ac-1814-4f9f-9d4d-e44d8e486c1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-p97ff-0 GenerateName:azuredisk-volume-tester-p97ff- Namespace:azuredisk-9183 SelfLink: UID:566e7b8d-00a6-44c7-91eb-cf569a62ef90 ResourceVersion:4161 Generation:0 CreationTimestamp:2022-09-04 05:18:39 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-907430288210826867 controller-revision-hash:azuredisk-volume-tester-p97ff-7dc4c9c847 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-p97ff-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-p97ff UID:8afb20ac-1814-4f9f-9d4d-e44d8e486c1d Controller:0xc000cb947e BlockOwnerDeletion:0xc000cb947f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 05:18:39 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8afb20ac-1814-4f9f-9d4d-e44d8e486c1d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 05:18:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]}.
I0904 05:18:40.049349       1 stateful_set.go:448] Syncing StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff with 1 pods
I0904 05:18:40.052794       1 stateful_set_control.go:368] StatefulSet azuredisk-9183/azuredisk-volume-tester-p97ff has 1 unhealthy Pods starting with azuredisk-volume-tester-p97ff-0
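The controller counts the brand-new pod as unhealthy until it reports Ready; the next sync settles it. A minimal sketch of watching that convergence, assuming the namespace and StatefulSet names taken from the log above:
# Watch the StatefulSet reach readiness (sketch)
kubectl --kubeconfig=./kubeconfig -n azuredisk-9183 rollout status statefulset/azuredisk-volume-tester-p97ff
kubectl --kubeconfig=./kubeconfig -n azuredisk-9183 get pod azuredisk-volume-tester-p97ff-0 -o wide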
... skipping 145 lines ...
I0904 05:19:21.609137       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 8 items received
I0904 05:19:22.739742       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4415" (3.8µs)
I0904 05:19:22.864882       1 publisher.go:181] Finished syncing namespace "azuredisk-6720" (18.129166ms)
I0904 05:19:22.868541       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (22.134124ms)
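The publisher and serviceaccounts controllers above are populating each new test namespace with the kube-root-ca.crt ConfigMap and a default ServiceAccount. A minimal sketch of checking that content, assuming the azuredisk-6720 namespace is still present:
# Confirm the per-namespace objects those controllers create (sketch)
kubectl --kubeconfig=./kubeconfig -n azuredisk-6720 get configmap kube-root-ca.crt
kubectl --kubeconfig=./kubeconfig -n azuredisk-6720 get serviceaccount default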
I0904 05:19:23.398802       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9183
I0904 05:19:23.427300       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9183, name default-token-sqlhs, uid f8508c88-cc1e-4e18-b81a-246487cd9594, event type delete
E0904 05:19:23.493779       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9183/default: secrets "default-token-7wdrd" is forbidden: unable to create new content in namespace azuredisk-9183 because it is being terminated
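This error is expected noise during teardown: the token controller tries to recreate a ServiceAccount token Secret while the namespace is already terminating, and the API server rejects new objects in a terminating namespace. A minimal sketch of confirming the namespace phase:
# Confirm the namespace is Terminating, which explains the forbidden Secret creation (sketch)
kubectl --kubeconfig=./kubeconfig get namespace azuredisk-9183 -o jsonpath='{.status.phase}{"\n"}'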
I0904 05:19:23.597426       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0" with version 4267
I0904 05:19:23.597471       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0]: phase: Bound, bound to: "pvc-769c6f6f-80fe-429d-8645-fdd49c85566d", bindCompleted: true, boundByController: true
I0904 05:19:23.597510       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0]: volume "pvc-769c6f6f-80fe-429d-8645-fdd49c85566d" found: phase: Bound, bound to: "azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0 (uid: 769c6f6f-80fe-429d-8645-fdd49c85566d)", boundByController: true
I0904 05:19:23.597522       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0]: claim is already correctly bound
I0904 05:19:23.597534       1 pv_controller.go:1012] binding volume "pvc-769c6f6f-80fe-429d-8645-fdd49c85566d" to claim "azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0"
I0904 05:19:23.597547       1 pv_controller.go:910] updating PersistentVolume[pvc-769c6f6f-80fe-429d-8645-fdd49c85566d]: binding to "azuredisk-9183/pvc-azuredisk-volume-tester-p97ff-0"
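The PV controller here is re-verifying an already-correct two-way binding: the PVC's spec.volumeName points at the PV, and the PV's spec.claimRef points back at the claim. A minimal sketch of reading both halves, using the claim and volume names from the log above:
# Inspect both sides of the PV/PVC binding being synchronized (sketch)
kubectl --kubeconfig=./kubeconfig -n azuredisk-9183 get pvc pvc-azuredisk-volume-tester-p97ff-0 -o jsonpath='{.spec.volumeName}{"\n"}'
kubectl --kubeconfig=./kubeconfig get pv pvc-769c6f6f-80fe-429d-8645-fdd49c85566d -o jsonpath='{.spec.claimRef.namespace}/{.spec.claimRef.name}{"\n"}'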
... skipping 101 lines ...
I0904 05:19:26.950294       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (43.204733ms)
I0904 05:19:27.745554       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4415
I0904 05:19:27.770125       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (2.9µs)
I0904 05:19:27.840838       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4415, name default-token-pmhll, uid 630c7f6f-e652-44eb-8621-64f94b579824, event type delete
I0904 05:19:27.946730       1 publisher.go:181] Finished syncing namespace "azuredisk-8652" (57.159338ms)
I0904 05:19:27.946799       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8652" (57.249239ms)
E0904 05:19:28.069505       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4415/default: secrets "default-token-77q9x" is forbidden: unable to create new content in namespace azuredisk-4415 because it is being terminated
I0904 05:19:28.139113       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4415/default), service account deleted, removing tokens
I0904 05:19:28.139428       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4415, name default, uid ac9bcd6a-d367-418a-815f-4c48310c7346, event type delete
I0904 05:19:28.139435       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4415" (2.7µs)
I0904 05:19:28.303798       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4415, name kube-root-ca.crt, uid 08b83bfe-162d-4681-accb-b1826d635a0d, event type delete
I0904 05:19:28.306938       1 publisher.go:181] Finished syncing namespace "azuredisk-4415" (3.496551ms)
I0904 05:19:28.370125       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4415" (3.5µs)
... skipping 2 lines ...
I0904 05:19:28.748375       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0904 05:19:28.759115       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8652" (3.1µs)
I0904 05:19:28.843027       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-6720
I0904 05:19:28.893987       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-6720, name default-token-fsq5w, uid 07c6eb5a-7a5a-4d22-b62f-378d7fd7fa9c, event type delete
I0904 05:19:28.948767       1 publisher.go:181] Finished syncing namespace "azuredisk-8470" (77.076229ms)
I0904 05:19:28.948981       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8470" (77.602936ms)
E0904 05:19:29.058552       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6720/default: secrets "default-token-dqkvj" is forbidden: unable to create new content in namespace azuredisk-6720 because it is being terminated
I0904 05:19:29.139307       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9183
I0904 05:19:29.174862       1 tokens_controller.go:252] syncServiceAccount(azuredisk-6720/default), service account deleted, removing tokens
I0904 05:19:29.174930       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.6µs)
I0904 05:19:29.176211       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-6720, name default, uid 90db73ec-a080-4e02-8557-74202309601b, event type delete
I0904 05:19:29.344048       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-6720, name kube-root-ca.crt, uid e74f68f3-6594-412f-9b00-080a25f609a4, event type delete
I0904 05:19:29.347577       1 publisher.go:181] Finished syncing namespace "azuredisk-6720" (3.834156ms)
... skipping 15 lines ...
I0904 05:19:30.095091       1 publisher.go:181] Finished syncing namespace "azuredisk-4162" (16.980748ms)
I0904 05:19:30.234619       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4162" (3.6µs)
I0904 05:19:30.235166       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4162, estimate: 0, errors: <nil>
I0904 05:19:30.248382       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4162" (406.096449ms)
I0904 05:19:30.820297       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-6200
I0904 05:19:30.870770       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-6200, name default-token-lr5wc, uid 3fb652cc-e2fe-4eaa-99f4-23086507fcef, event type delete
E0904 05:19:30.885425       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6200/default: secrets "default-token-n6k5q" is forbidden: unable to create new content in namespace azuredisk-6200 because it is being terminated
I0904 05:19:30.919974       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-6200, name kube-root-ca.crt, uid 11176175-4fa0-4e3d-b0ac-77107953b236, event type delete
I0904 05:19:30.922813       1 publisher.go:181] Finished syncing namespace "azuredisk-6200" (3.123246ms)
I0904 05:19:30.991779       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6200" (3.3µs)
I0904 05:19:30.992143       1 tokens_controller.go:252] syncServiceAccount(azuredisk-6200/default), service account deleted, removing tokens
I0904 05:19:30.992468       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-6200, name default, uid 8be38691-87e8-4902-b675-5df8042d5c25, event type delete
I0904 05:19:31.075780       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6200" (3.5µs)
I0904 05:19:31.077474       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-6200, estimate: 0, errors: <nil>
I0904 05:19:31.087538       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-6200" (271.349474ms)
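Each namespace teardown above ends with deleteAllContent reporting zero errors, after which the namespace controller finishes the sync and the namespace disappears. If a namespace were instead stuck in Terminating, a sketch of listing whatever is still left in it (may be slow, and may print errors for aggregated APIs):
# Enumerate remaining namespaced objects in a stuck namespace (sketch)
kubectl --kubeconfig=./kubeconfig api-resources --verbs=list --namespaced -o name \
  | xargs -n1 kubectl --kubeconfig=./kubeconfig get -n azuredisk-6200 --ignore-not-found --show-kind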
I0904 05:19:31.323922       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="76.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:34912" resp=200
I0904 05:19:31.564685       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1166
I0904 05:19:31.564729       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1166" (70.301µs)
I0904 05:19:31.808805       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5320
I0904 05:19:31.860940       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5320, name default-token-lh8vt, uid a9600b05-11cd-4ffb-9e3d-3e037956f8fc, event type delete
E0904 05:19:31.875921       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5320/default: secrets "default-token-bgzxg" is forbidden: unable to create new content in namespace azuredisk-5320 because it is being terminated
I0904 05:19:31.957165       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5320, name kube-root-ca.crt, uid 11ee9c79-cf60-4e2a-870d-40537968b2b5, event type delete
I0904 05:19:31.961732       1 publisher.go:181] Finished syncing namespace "azuredisk-5320" (4.82767ms)
I0904 05:19:31.989531       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5320/default), service account deleted, removing tokens
I0904 05:19:31.989662       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5320" (4.201µs)
I0904 05:19:31.989763       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5320, name default, uid e0b2722c-e47f-4443-a76d-7be42d96bd7b, event type delete
I0904 05:19:32.009823       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5320, estimate: 0, errors: <nil>
... skipping 11 lines ...
I0904 05:19:33.016854       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9103" (246.143606ms)
2022/09/04 05:19:33 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1344.594 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped
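The same counts are recorded in the JUnit file noted above. A minimal sketch of cross-checking the XML against this console summary (attribute names can vary between Ginkgo versions):
# Pull the suite-level counters out of the JUnit report (sketch)
grep -Eo 'tests="[0-9]+"|failures="[0-9]+"|skipped="[0-9]+"' /logs/artifacts/junit_01.xml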

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
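A sketch of trying the v2 CLI the notice points to, assuming a module-aware Go toolchain; the exact version to pin is whatever the linked migration guide currently recommends:
# Install and check the Ginkgo v2 CLI (sketch)
go install github.com/onsi/ginkgo/v2/ginkgo@latest
ginkgo version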
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-gmrtz, container manager
STEP: Dumping workload cluster default/capz-fvszkt logs
Sep  4 05:21:05.834: INFO: Collecting logs for Linux node capz-fvszkt-control-plane-wnbrq in cluster capz-fvszkt in namespace default

Sep  4 05:22:05.836: INFO: Collecting boot logs for AzureMachine capz-fvszkt-control-plane-wnbrq

Failed to get logs for machine capz-fvszkt-control-plane-mqpld, cluster default/capz-fvszkt: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 05:22:07.005: INFO: Collecting logs for Linux node capz-fvszkt-md-0-tjdcv in cluster capz-fvszkt in namespace default

Sep  4 05:23:07.008: INFO: Collecting boot logs for AzureMachine capz-fvszkt-md-0-tjdcv

Failed to get logs for machine capz-fvszkt-md-0-7558fb47d8-bckrr, cluster default/capz-fvszkt: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 05:23:07.497: INFO: Collecting logs for Linux node capz-fvszkt-md-0-jvr4s in cluster capz-fvszkt in namespace default

Sep  4 05:24:07.499: INFO: Collecting boot logs for AzureMachine capz-fvszkt-md-0-jvr4s

Failed to get logs for machine capz-fvszkt-md-0-7558fb47d8-c69gb, cluster default/capz-fvszkt: open /etc/azure-ssh/azure-ssh: no such file or directory
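All three collection failures above share one cause: the boot-log collector tries to open /etc/azure-ssh/azure-ssh (which looks like an SSH credential mount in this job's setup) and the file is absent, so only the API-side logs are gathered. A minimal sketch of checking for it from within the test container:
# Check whether the credential the boot-log collector opens is present (sketch)
test -s /etc/azure-ssh/azure-ssh && echo "azure-ssh key present" || echo "azure-ssh key missing"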
STEP: Dumping workload cluster default/capz-fvszkt kube-system pod logs
STEP: Fetching kube-system pod logs took 1.136965603s
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-fvszkt-control-plane-wnbrq
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-fsf96
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-fvszkt-control-plane-wnbrq, container kube-scheduler
STEP: Dumping workload cluster default/capz-fvszkt Azure activity log
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-zfj5n
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-fvszkt-control-plane-wnbrq, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-node-2hgr8, container calico-node
STEP: failed to find events of Pod "kube-scheduler-capz-fvszkt-control-plane-wnbrq"
STEP: Collecting events for Pod kube-system/calico-node-2hgr8
STEP: Creating log watcher for controller kube-system/calico-node-r5xbn, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-r5xbn
STEP: Creating log watcher for controller kube-system/calico-node-vjqgs, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-vjqgs
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-9zmvq, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-9zmvq
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-ncsxl, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-ncsxl
STEP: Creating log watcher for controller kube-system/etcd-capz-fvszkt-control-plane-wnbrq, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-fvszkt-control-plane-wnbrq
STEP: Creating log watcher for controller kube-system/kube-proxy-n6gwt, container kube-proxy
STEP: failed to find events of Pod "etcd-capz-fvszkt-control-plane-wnbrq"
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-fvszkt-control-plane-wnbrq
STEP: failed to find events of Pod "kube-apiserver-capz-fvszkt-control-plane-wnbrq"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-fvszkt-control-plane-wnbrq, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-fvszkt-control-plane-wnbrq
STEP: failed to find events of Pod "kube-controller-manager-capz-fvszkt-control-plane-wnbrq"
STEP: Creating log watcher for controller kube-system/kube-proxy-d552q, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-d552q
STEP: Collecting events for Pod kube-system/kube-proxy-wt8f6
STEP: Collecting events for Pod kube-system/kube-proxy-n6gwt
STEP: Creating log watcher for controller kube-system/kube-proxy-wt8f6, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-zfj5n, container calico-kube-controllers
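The STEP lines above create one log watcher per kube-system container and fetch each pod's events; "failed to find events" only means the event list for that pod was empty (cluster events expire, typically after an hour). A minimal sketch of the equivalent manual collection for a single pod, using names from the log:
# Roughly what the dump steps above do for one pod (sketch)
kubectl --kubeconfig=./kubeconfig -n kube-system get events --field-selector involvedObject.name=etcd-capz-fvszkt-control-plane-wnbrq
kubectl --kubeconfig=./kubeconfig -n kube-system logs etcd-capz-fvszkt-control-plane-wnbrq -c etcd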
... skipping 19 lines ...