Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-04 20:01
Elapsed: 50m42s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 635 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
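For reference, the identity secret created by ./hack/create-identity-secret.sh above can be approximated with plain kubectl. A minimal sketch, assuming the service-principal password is in AZURE_CLIENT_SECRET and that the clientSecret key name and the clusterctl move label are what the AzureClusterIdentity expects (both are assumptions, not read from the script):

# Sketch only: key name, namespace and label are assumptions, not taken from the script.
kubectl create secret generic cluster-identity-secret \
  --namespace default \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
kubectl label secret cluster-identity-secret clusterctl.cluster.x-k8s.io/move=true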
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-7sh698-kubeconfig; do sleep 1; done"
capz-7sh698-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-7sh698-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-7sh698-control-plane-sh4jc   NotReady   <none>   1s    v1.21.15-rc.0.4+2fef630dd216dd
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-7sh698-control-plane-sh4jc condition met
node/capz-7sh698-mp-0000000 condition met
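The kubeconfig retrieval above (read the <cluster>-kubeconfig secret, take .data.value, base64-decode it) is the generic Cluster API pattern; an equivalent sketch using jsonpath instead of jq, with the cluster name left as a placeholder:

# Equivalent to the jq pipeline in the log; CLUSTER_NAME is a placeholder.
kubectl get secret "${CLUSTER_NAME}-kubeconfig" -o jsonpath='{.data.value}' \
  | base64 --decode > ./kubeconfig
# Then wait for the control-plane node to register, as the job does above.
timeout 600 bash -c \
  'until kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane; do sleep 1; done'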
... skipping 46 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:21:04.306: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-cshgg" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  4 20:21:04.342: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 36.263661ms
Sep  4 20:21:06.377: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070515075s
Sep  4 20:21:08.411: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105020451s
Sep  4 20:21:10.446: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140148788s
Sep  4 20:21:12.480: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.174304927s
Sep  4 20:21:14.517: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.210557125s
... skipping 3 lines ...
Sep  4 20:21:22.664: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.357508603s
Sep  4 20:21:24.699: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.392988407s
Sep  4 20:21:26.734: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.428295847s
Sep  4 20:21:28.772: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Pending", Reason="", readiness=false. Elapsed: 24.465693486s
Sep  4 20:21:30.808: INFO: Pod "azuredisk-volume-tester-cshgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.502239452s
STEP: Saw pod success
Sep  4 20:21:30.808: INFO: Pod "azuredisk-volume-tester-cshgg" satisfied condition "Succeeded or Failed"
Sep  4 20:21:30.808: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-cshgg"
Sep  4 20:21:30.854: INFO: Pod azuredisk-volume-tester-cshgg has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-cshgg in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:21:30.965: INFO: deleting PVC "azuredisk-8081"/"pvc-pjmjm"
Sep  4 20:21:30.965: INFO: Deleting PersistentVolumeClaim "pvc-pjmjm"
STEP: waiting for claim's PV "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" to be deleted
Sep  4 20:21:31.015: INFO: Waiting up to 10m0s for PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc to get deleted
Sep  4 20:21:31.048: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Released (33.136659ms)
Sep  4 20:21:36.082: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (5.067420096s)
Sep  4 20:21:41.116: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (10.101548308s)
Sep  4 20:21:46.151: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (15.136069292s)
Sep  4 20:21:51.188: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (20.173197399s)
Sep  4 20:21:56.223: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (25.208591813s)
Sep  4 20:22:01.258: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (30.243079843s)
Sep  4 20:22:06.295: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc found and phase=Failed (35.280232141s)
Sep  4 20:22:11.328: INFO: PersistentVolume pvc-7d744a11-53a2-496c-b906-1636e0e14dbc was removed
Sep  4 20:22:11.328: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  4 20:22:11.361: INFO: Claim "azuredisk-8081" in namespace "pvc-pjmjm" doesn't exist in the system
Sep  4 20:22:11.361: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-sz754
Sep  4 20:22:11.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  4 20:22:35.435: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-zzskn"
Sep  4 20:22:35.481: INFO: Error getting logs for pod azuredisk-volume-tester-zzskn: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-zzskn)
STEP: Deleting pod azuredisk-volume-tester-zzskn in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:22:35.585: INFO: deleting PVC "azuredisk-5466"/"pvc-8n97s"
Sep  4 20:22:35.585: INFO: Deleting PersistentVolumeClaim "pvc-8n97s"
STEP: waiting for claim's PV "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" to be deleted
Sep  4 20:22:35.620: INFO: Waiting up to 10m0s for PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 to get deleted
Sep  4 20:22:35.655: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Bound (34.545301ms)
Sep  4 20:22:40.691: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (5.070941145s)
Sep  4 20:22:45.727: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (10.107014705s)
Sep  4 20:22:50.763: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (15.142447699s)
Sep  4 20:22:55.796: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (20.176254492s)
Sep  4 20:23:00.834: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (25.214143826s)
Sep  4 20:23:05.870: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (30.249898374s)
Sep  4 20:23:10.904: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (35.284209666s)
Sep  4 20:23:15.938: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (40.317659838s)
Sep  4 20:23:20.971: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 found and phase=Failed (45.351286746s)
Sep  4 20:23:26.006: INFO: PersistentVolume pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 was removed
Sep  4 20:23:26.006: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  4 20:23:26.039: INFO: Claim "azuredisk-5466" in namespace "pvc-8n97s" doesn't exist in the system
Sep  4 20:23:26.040: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-zxjgc
Sep  4 20:23:26.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:23:26.777: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-c98mc" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  4 20:23:26.809: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.536558ms
Sep  4 20:23:28.843: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066092196s
Sep  4 20:23:30.878: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100712646s
Sep  4 20:23:32.912: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135469545s
Sep  4 20:23:34.947: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169657671s
Sep  4 20:23:36.981: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203610875s
... skipping 2 lines ...
Sep  4 20:23:43.086: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.309258166s
Sep  4 20:23:45.121: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.343797966s
Sep  4 20:23:47.155: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.377962598s
Sep  4 20:23:49.191: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.414538949s
Sep  4 20:23:51.228: INFO: Pod "azuredisk-volume-tester-c98mc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.450633733s
STEP: Saw pod success
Sep  4 20:23:51.228: INFO: Pod "azuredisk-volume-tester-c98mc" satisfied condition "Succeeded or Failed"
Sep  4 20:23:51.228: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-c98mc"
Sep  4 20:23:51.264: INFO: Pod azuredisk-volume-tester-c98mc has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-c98mc in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:23:51.384: INFO: deleting PVC "azuredisk-2790"/"pvc-9mkxc"
Sep  4 20:23:51.384: INFO: Deleting PersistentVolumeClaim "pvc-9mkxc"
STEP: waiting for claim's PV "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" to be deleted
Sep  4 20:23:51.418: INFO: Waiting up to 10m0s for PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 to get deleted
Sep  4 20:23:51.450: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Released (32.349564ms)
Sep  4 20:23:56.487: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (5.068493122s)
Sep  4 20:24:01.521: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (10.102689683s)
Sep  4 20:24:06.559: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (15.141353081s)
Sep  4 20:24:11.597: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (20.178491378s)
Sep  4 20:24:16.631: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (25.212872723s)
Sep  4 20:24:21.667: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 found and phase=Failed (30.249409514s)
Sep  4 20:24:26.703: INFO: PersistentVolume pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 was removed
Sep  4 20:24:26.703: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  4 20:24:26.735: INFO: Claim "azuredisk-2790" in namespace "pvc-9mkxc" doesn't exist in the system
Sep  4 20:24:26.735: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-ssqhg
Sep  4 20:24:26.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  4 20:24:27.453: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kdpxs" in namespace "azuredisk-5356" to be "Error status code"
Sep  4 20:24:27.486: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 32.254247ms
Sep  4 20:24:29.519: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066051804s
Sep  4 20:24:31.566: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113172297s
Sep  4 20:24:33.601: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147591071s
Sep  4 20:24:35.635: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181432257s
Sep  4 20:24:37.668: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.215215056s
Sep  4 20:24:39.704: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.250647347s
Sep  4 20:24:41.739: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.285745973s
Sep  4 20:24:43.773: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.319346399s
Sep  4 20:24:45.806: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.353169975s
Sep  4 20:24:47.840: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 20.386938008s
Sep  4 20:24:49.873: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Pending", Reason="", readiness=false. Elapsed: 22.420003341s
Sep  4 20:24:51.909: INFO: Pod "azuredisk-volume-tester-kdpxs": Phase="Failed", Reason="", readiness=false. Elapsed: 24.456179124s
STEP: Saw pod failure
Sep  4 20:24:51.910: INFO: Pod "azuredisk-volume-tester-kdpxs" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  4 20:24:51.951: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-kdpxs"
Sep  4 20:24:51.989: INFO: Pod azuredisk-volume-tester-kdpxs has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-kdpxs in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:24:52.096: INFO: deleting PVC "azuredisk-5356"/"pvc-ptkps"
Sep  4 20:24:52.096: INFO: Deleting PersistentVolumeClaim "pvc-ptkps"
STEP: waiting for claim's PV "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" to be deleted
Sep  4 20:24:52.131: INFO: Waiting up to 10m0s for PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd to get deleted
Sep  4 20:24:52.164: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Released (32.351626ms)
Sep  4 20:24:57.198: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (5.066644575s)
Sep  4 20:25:02.234: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (10.103271263s)
Sep  4 20:25:07.276: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (15.145184871s)
Sep  4 20:25:12.311: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (20.179426366s)
Sep  4 20:25:17.345: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (25.213826492s)
Sep  4 20:25:22.379: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd found and phase=Failed (30.247944559s)
Sep  4 20:25:27.416: INFO: PersistentVolume pvc-3830149c-1f6b-4eec-886e-f55736bb11cd was removed
Sep  4 20:25:27.416: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  4 20:25:27.449: INFO: Claim "azuredisk-5356" in namespace "pvc-ptkps" doesn't exist in the system
Sep  4 20:25:27.449: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-mpxxs
Sep  4 20:25:27.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 55 lines ...
Sep  4 20:26:43.941: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (15.144804814s)
Sep  4 20:26:48.974: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (20.177853944s)
Sep  4 20:26:54.011: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (25.215511676s)
Sep  4 20:26:59.045: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (30.248814432s)
Sep  4 20:27:04.082: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (35.286424254s)
Sep  4 20:27:09.120: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Bound (40.323624684s)
Sep  4 20:27:14.154: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Failed (45.357795765s)
Sep  4 20:27:19.191: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Failed (50.394690962s)
Sep  4 20:27:24.226: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Failed (55.430345455s)
Sep  4 20:27:29.262: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Failed (1m0.466208548s)
Sep  4 20:27:34.298: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df found and phase=Failed (1m5.502036885s)
Sep  4 20:27:39.334: INFO: PersistentVolume pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df was removed
Sep  4 20:27:39.334: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:27:39.368: INFO: Claim "azuredisk-5194" in namespace "pvc-fg4r8" doesn't exist in the system
Sep  4 20:27:39.368: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-5zkj2
Sep  4 20:27:39.403: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-jlrx8"
Sep  4 20:27:39.447: INFO: Pod azuredisk-volume-tester-jlrx8 has the following logs: 
... skipping 8 lines ...
Sep  4 20:27:44.653: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (5.068034544s)
Sep  4 20:27:49.688: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (10.103140201s)
Sep  4 20:27:54.721: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (15.136807958s)
Sep  4 20:27:59.757: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (20.172906385s)
Sep  4 20:28:04.790: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (25.205646047s)
Sep  4 20:28:09.826: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Bound (30.241178246s)
Sep  4 20:28:14.862: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (35.277516232s)
Sep  4 20:28:19.896: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (40.311758343s)
Sep  4 20:28:24.930: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (45.345223847s)
Sep  4 20:28:29.966: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (50.380956132s)
Sep  4 20:28:35.002: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (55.417452203s)
Sep  4 20:28:40.039: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (1m0.453932987s)
Sep  4 20:28:45.075: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (1m5.490148115s)
Sep  4 20:28:50.109: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 found and phase=Failed (1m10.524466466s)
Sep  4 20:28:55.146: INFO: PersistentVolume pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 was removed
Sep  4 20:28:55.146: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:28:55.179: INFO: Claim "azuredisk-5194" in namespace "pvc-f8chf" doesn't exist in the system
Sep  4 20:28:55.179: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-bgnqx
Sep  4 20:28:55.214: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-l7pcc"
Sep  4 20:28:55.267: INFO: Pod azuredisk-volume-tester-l7pcc has the following logs: 
... skipping 9 lines ...
Sep  4 20:29:05.510: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (10.10452893s)
Sep  4 20:29:10.559: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (15.152983481s)
Sep  4 20:29:15.595: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (20.188851768s)
Sep  4 20:29:20.632: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (25.226379489s)
Sep  4 20:29:25.667: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (30.260810706s)
Sep  4 20:29:30.703: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Bound (35.296838649s)
Sep  4 20:29:35.740: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (40.333814914s)
Sep  4 20:29:40.776: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (45.370384938s)
Sep  4 20:29:45.810: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (50.404088458s)
Sep  4 20:29:50.849: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (55.442912045s)
Sep  4 20:29:55.883: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (1m0.47740856s)
Sep  4 20:30:00.921: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (1m5.514995623s)
Sep  4 20:30:05.958: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 found and phase=Failed (1m10.552112847s)
Sep  4 20:30:10.994: INFO: PersistentVolume pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 was removed
Sep  4 20:30:10.994: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  4 20:30:11.026: INFO: Claim "azuredisk-5194" in namespace "pvc-tb4zj" doesn't exist in the system
Sep  4 20:30:11.026: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-hnwjg
Sep  4 20:30:11.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 64 lines ...
Sep  4 20:32:01.066: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (15.138372985s)
Sep  4 20:32:06.103: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (20.17478518s)
Sep  4 20:32:11.137: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (25.209081237s)
Sep  4 20:32:16.171: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (30.243460722s)
Sep  4 20:32:21.208: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (35.280158318s)
Sep  4 20:32:26.247: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Bound (40.318996945s)
Sep  4 20:32:31.282: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Failed (45.35421945s)
Sep  4 20:32:36.319: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Failed (50.391715382s)
Sep  4 20:32:41.355: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Failed (55.427362435s)
Sep  4 20:32:46.390: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 found and phase=Failed (1m0.462638663s)
Sep  4 20:32:51.424: INFO: PersistentVolume pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 was removed
Sep  4 20:32:51.425: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  4 20:32:51.458: INFO: Claim "azuredisk-1353" in namespace "pvc-rvtlw" doesn't exist in the system
Sep  4 20:32:51.458: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-7l7hr
Sep  4 20:32:51.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:33:10.095: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t7djp" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  4 20:33:10.130: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 35.207712ms
Sep  4 20:33:12.170: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07458954s
Sep  4 20:33:14.213: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11821398s
Sep  4 20:33:16.249: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154076328s
Sep  4 20:33:18.285: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189565024s
Sep  4 20:33:20.322: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.226806155s
... skipping 10 lines ...
Sep  4 20:33:42.732: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 32.637014476s
Sep  4 20:33:44.770: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.67439708s
Sep  4 20:33:46.806: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 36.711143021s
Sep  4 20:33:48.843: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Pending", Reason="", readiness=false. Elapsed: 38.747404042s
Sep  4 20:33:50.879: INFO: Pod "azuredisk-volume-tester-t7djp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.78376587s
STEP: Saw pod success
Sep  4 20:33:50.879: INFO: Pod "azuredisk-volume-tester-t7djp" satisfied condition "Succeeded or Failed"
Sep  4 20:33:50.879: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-t7djp"
Sep  4 20:33:50.927: INFO: Pod azuredisk-volume-tester-t7djp has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-t7djp in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:33:51.052: INFO: deleting PVC "azuredisk-59"/"pvc-sbb4q"
Sep  4 20:33:51.052: INFO: Deleting PersistentVolumeClaim "pvc-sbb4q"
STEP: waiting for claim's PV "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" to be deleted
Sep  4 20:33:51.087: INFO: Waiting up to 10m0s for PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef to get deleted
Sep  4 20:33:51.119: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Released (32.203959ms)
Sep  4 20:33:56.154: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (5.067710411s)
Sep  4 20:34:01.191: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (10.103974513s)
Sep  4 20:34:06.226: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (15.139428485s)
Sep  4 20:34:11.262: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (20.175456863s)
Sep  4 20:34:16.299: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (25.212394403s)
Sep  4 20:34:21.336: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef found and phase=Failed (30.249625412s)
Sep  4 20:34:26.370: INFO: PersistentVolume pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef was removed
Sep  4 20:34:26.370: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 20:34:26.403: INFO: Claim "azuredisk-59" in namespace "pvc-sbb4q" doesn't exist in the system
Sep  4 20:34:26.403: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-dfx8s
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:34:26.505: INFO: deleting PVC "azuredisk-59"/"pvc-8ddrz"
Sep  4 20:34:26.505: INFO: Deleting PersistentVolumeClaim "pvc-8ddrz"
STEP: waiting for claim's PV "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" to be deleted
Sep  4 20:34:26.539: INFO: Waiting up to 10m0s for PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 to get deleted
Sep  4 20:34:26.575: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Bound (35.877193ms)
Sep  4 20:34:31.612: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Failed (5.072875508s)
Sep  4 20:34:36.648: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Failed (10.108117701s)
Sep  4 20:34:41.687: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Failed (15.147482416s)
Sep  4 20:34:46.723: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Failed (20.183211994s)
Sep  4 20:34:51.759: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 found and phase=Failed (25.219619716s)
Sep  4 20:34:56.796: INFO: PersistentVolume pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 was removed
Sep  4 20:34:56.796: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  4 20:34:56.830: INFO: Claim "azuredisk-59" in namespace "pvc-8ddrz" doesn't exist in the system
Sep  4 20:34:56.830: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-ngbhb
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:35:08.108: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-wdrfq" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  4 20:35:08.145: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.597706ms
Sep  4 20:35:10.185: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077120067s
Sep  4 20:35:12.222: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113543159s
Sep  4 20:35:14.257: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.148558878s
Sep  4 20:35:16.294: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.185430481s
Sep  4 20:35:18.329: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221328529s
... skipping 9 lines ...
Sep  4 20:35:38.698: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 30.589496984s
Sep  4 20:35:40.740: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.632119892s
Sep  4 20:35:42.776: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.667441698s
Sep  4 20:35:44.813: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.705216654s
Sep  4 20:35:46.849: INFO: Pod "azuredisk-volume-tester-wdrfq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.741274807s
STEP: Saw pod success
Sep  4 20:35:46.850: INFO: Pod "azuredisk-volume-tester-wdrfq" satisfied condition "Succeeded or Failed"
Sep  4 20:35:46.850: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-wdrfq"
Sep  4 20:35:46.892: INFO: Pod azuredisk-volume-tester-wdrfq has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.065531 seconds, 1.5GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  4 20:35:47.004: INFO: deleting PVC "azuredisk-2546"/"pvc-br577"
Sep  4 20:35:47.004: INFO: Deleting PersistentVolumeClaim "pvc-br577"
STEP: waiting for claim's PV "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" to be deleted
Sep  4 20:35:47.039: INFO: Waiting up to 10m0s for PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 to get deleted
Sep  4 20:35:47.074: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Released (34.282495ms)
Sep  4 20:35:52.112: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (5.072591996s)
Sep  4 20:35:57.146: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (10.106193097s)
Sep  4 20:36:02.182: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (15.142926821s)
Sep  4 20:36:07.218: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (20.178568156s)
Sep  4 20:36:12.254: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (25.214729388s)
Sep  4 20:36:17.290: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (30.250748811s)
Sep  4 20:36:22.329: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 found and phase=Failed (35.29002505s)
Sep  4 20:36:27.368: INFO: PersistentVolume pvc-e5557e4e-e96a-4949-b559-ec237dd73264 was removed
Sep  4 20:36:27.368: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  4 20:36:27.400: INFO: Claim "azuredisk-2546" in namespace "pvc-br577" doesn't exist in the system
Sep  4 20:36:27.400: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-pm7ls
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  4 20:36:39.901: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-wbpkq" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  4 20:36:39.933: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.985352ms
Sep  4 20:36:41.968: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066875908s
Sep  4 20:36:44.002: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100822244s
Sep  4 20:36:46.038: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136651995s
Sep  4 20:36:48.072: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171503941s
Sep  4 20:36:50.107: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206166648s
... skipping 10 lines ...
Sep  4 20:37:12.499: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 32.598582757s
Sep  4 20:37:14.535: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.633828288s
Sep  4 20:37:16.573: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.672283125s
Sep  4 20:37:18.609: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Running", Reason="", readiness=true. Elapsed: 38.708312187s
Sep  4 20:37:20.645: INFO: Pod "azuredisk-volume-tester-wbpkq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.743768361s
STEP: Saw pod success
Sep  4 20:37:20.645: INFO: Pod "azuredisk-volume-tester-wbpkq" satisfied condition "Succeeded or Failed"
Sep  4 20:37:20.645: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-wbpkq"
Sep  4 20:37:20.687: INFO: Pod azuredisk-volume-tester-wbpkq has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-wbpkq in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:37:20.795: INFO: deleting PVC "azuredisk-8582"/"pvc-kwgng"
Sep  4 20:37:20.795: INFO: Deleting PersistentVolumeClaim "pvc-kwgng"
STEP: waiting for claim's PV "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" to be deleted
Sep  4 20:37:20.829: INFO: Waiting up to 10m0s for PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 to get deleted
Sep  4 20:37:20.861: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Released (32.044854ms)
Sep  4 20:37:25.899: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (5.069478013s)
Sep  4 20:37:30.935: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (10.105424086s)
Sep  4 20:37:35.973: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (15.143410059s)
Sep  4 20:37:41.008: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (20.178410432s)
Sep  4 20:37:46.044: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (25.214849898s)
Sep  4 20:37:51.078: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (30.248990451s)
Sep  4 20:37:56.112: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (35.283049216s)
Sep  4 20:38:01.150: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (40.320417133s)
Sep  4 20:38:06.188: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 found and phase=Failed (45.358211133s)
Sep  4 20:38:11.224: INFO: PersistentVolume pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 was removed
Sep  4 20:38:11.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  4 20:38:11.257: INFO: Claim "azuredisk-8582" in namespace "pvc-kwgng" doesn't exist in the system
Sep  4 20:38:11.257: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-84xsc
STEP: validating provisioned PV
STEP: checking the PV
... skipping 173 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Sep  4 20:39:44.732: INFO: deleting PVC "azuredisk-7051"/"pvc-hbwx9"
Sep  4 20:39:44.732: INFO: Deleting PersistentVolumeClaim "pvc-hbwx9"
STEP: waiting for claim's PV "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" to be deleted
Sep  4 20:39:44.771: INFO: Waiting up to 10m0s for PersistentVolume pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a to get deleted
Sep  4 20:39:44.804: INFO: PersistentVolume pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a found and phase=Failed (32.406025ms)
Sep  4 20:39:49.839: INFO: PersistentVolume pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a found and phase=Failed (5.067802943s)
Sep  4 20:39:54.876: INFO: PersistentVolume pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a was removed
Sep  4 20:39:54.876: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7051 to be removed
Sep  4 20:39:54.908: INFO: Claim "azuredisk-7051" in namespace "pvc-hbwx9" doesn't exist in the system
Sep  4 20:39:54.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7051" for this suite.

... skipping 217 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  4 20:41:21.878: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.308 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 248 lines ...
I0904 20:15:06.749606       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-04 20:08:04 +0000 UTC to 2032-09-01 20:13:04 +0000 UTC (now=2022-09-04 20:15:06.749589125 +0000 UTC))
I0904 20:15:06.749920       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662322505" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662322504" (2022-09-04 19:15:04 +0000 UTC to 2023-09-04 19:15:04 +0000 UTC (now=2022-09-04 20:15:06.749905153 +0000 UTC))
I0904 20:15:06.750232       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662322506" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662322506" (2022-09-04 19:15:05 +0000 UTC to 2023-09-04 19:15:05 +0000 UTC (now=2022-09-04 20:15:06.75021438 +0000 UTC))
I0904 20:15:06.750328       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0904 20:15:06.750390       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0904 20:15:06.750976       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0904 20:15:08.134189       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 20:15:08.134217       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0904 20:15:12.288673       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0904 20:15:12.289081       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-7sh698-control-plane-sh4jc_c70078f7-54e4-47e3-8fa1-f49df4347487 became leader"
I0904 20:15:12.633064       1 request.go:600] Waited for 91.996042ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/coordination.k8s.io/v1beta1?timeout=32s
I0904 20:15:12.682919       1 request.go:600] Waited for 141.836519ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
I0904 20:15:12.732933       1 request.go:600] Waited for 187.892042ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1?timeout=32s
I0904 20:15:12.783751       1 request.go:600] Waited for 238.664139ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/scheduling.k8s.io/v1beta1?timeout=32s
... skipping 39 lines ...
I0904 20:15:13.087865       1 reflector.go:219] Starting reflector *v1.Secret (20h34m42.502662467s) from k8s.io/client-go/informers/factory.go:134
I0904 20:15:13.087938       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0904 20:15:13.088053       1 reflector.go:219] Starting reflector *v1.ServiceAccount (20h34m42.502662467s) from k8s.io/client-go/informers/factory.go:134
I0904 20:15:13.088191       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0904 20:15:13.088481       1 reflector.go:219] Starting reflector *v1.Node (20h34m42.502662467s) from k8s.io/client-go/informers/factory.go:134
I0904 20:15:13.088538       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
W0904 20:15:13.109227       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 20:15:13.109254       1 controllermanager.go:559] Starting "serviceaccount"
I0904 20:15:13.114979       1 controllermanager.go:574] Started "serviceaccount"
I0904 20:15:13.115001       1 controllermanager.go:559] Starting "job"
I0904 20:15:13.115191       1 serviceaccounts_controller.go:117] Starting service account controller
I0904 20:15:13.115209       1 shared_informer.go:240] Waiting for caches to sync for service account
I0904 20:15:13.123390       1 controllermanager.go:574] Started "job"
... skipping 88 lines ...
I0904 20:15:14.641415       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0904 20:15:14.641441       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0904 20:15:14.641495       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0904 20:15:14.641516       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0904 20:15:14.641531       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0904 20:15:14.641577       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0904 20:15:14.641650       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:15:14.641665       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0904 20:15:14.642003       1 controllermanager.go:574] Started "attachdetach"
I0904 20:15:14.642022       1 controllermanager.go:559] Starting "replicationcontroller"
I0904 20:15:14.642805       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-control-plane-sh4jc"
W0904 20:15:14.642843       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7sh698-control-plane-sh4jc" does not exist
I0904 20:15:14.642920       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 20:15:14.642976       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0904 20:15:14.792028       1 controllermanager.go:574] Started "replicationcontroller"
I0904 20:15:14.792320       1 controllermanager.go:559] Starting "csrapproving"
I0904 20:15:14.792238       1 replica_set.go:182] Starting replicationcontroller controller
I0904 20:15:14.792639       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
... skipping 78 lines ...
I0904 20:15:16.530625       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0904 20:15:16.530776       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0904 20:15:16.530889       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0904 20:15:16.531024       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0904 20:15:16.531132       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0904 20:15:16.531269       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0904 20:15:16.531378       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 20:15:16.531514       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0904 20:15:16.531709       1 controllermanager.go:574] Started "persistentvolume-binder"
I0904 20:15:16.531880       1 controllermanager.go:559] Starting "clusterrole-aggregation"
I0904 20:15:16.531850       1 pv_controller_base.go:308] Starting persistent volume controller
I0904 20:15:16.532148       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0904 20:15:16.641268       1 controllermanager.go:574] Started "clusterrole-aggregation"
... skipping 358 lines ...
I0904 20:15:17.785866       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0904 20:15:17.785992       1 endpointslicemirroring_controller.go:218] Starting 5 worker threads
I0904 20:15:17.786143       1 endpointslicemirroring_controller.go:273] syncEndpoints("default/kubernetes")
I0904 20:15:17.786281       1 endpointslicemirroring_controller.go:291] default/kubernetes Endpoints should not be mirrored, cleaning up any mirrored EndpointSlices
I0904 20:15:17.786421       1 endpointslicemirroring_controller.go:270] Finished syncing EndpointSlices for "default/kubernetes" Endpoints. (278.617µs)
I0904 20:15:17.772764       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="177.373557ms"
I0904 20:15:17.786690       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:15:17.786841       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:15:17.786819618 +0000 UTC m=+13.714194449"
I0904 20:15:17.787586       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 20:15:17 +0000 UTC - now: 2022-09-04 20:15:17.787580165 +0000 UTC m=+13.714955096]
I0904 20:15:17.784211       1 shared_informer.go:270] caches populated
I0904 20:15:17.788098       1 shared_informer.go:247] Caches are synced for endpoint 
I0904 20:15:17.784269       1 request.go:600] Waited for 214.67106ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/batch/v1/jobs?limit=500&resourceVersion=0
I0904 20:15:17.788580       1 endpoints_controller.go:381] Finished syncing service "default/kubernetes" endpoints. (2.9µs)
I0904 20:15:17.788750       1 endpoints_controller.go:555] Update endpoints for kube-system/kube-dns, ready: 0 not ready: 0
I0904 20:15:17.784297       1 graph_builder.go:279] garbage controller monitor not yet synced: node.k8s.io/v1, Resource=runtimeclasses
I0904 20:15:17.784315       1 resource_quota_monitor.go:294] quota monitor not synced: batch/v1, Resource=jobs
I0904 20:15:17.785615       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="191.714643ms"
I0904 20:15:17.789356       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:15:17.789496       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:15:17.789478982 +0000 UTC m=+13.716853813"
I0904 20:15:17.800613       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:15:17 +0000 UTC - now: 2022-09-04 20:15:17.80060517 +0000 UTC m=+13.727980001]
I0904 20:15:17.801019       1 graph_builder.go:279] garbage controller monitor not yet synced: admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations
I0904 20:15:17.801368       1 request.go:600] Waited for 231.755216ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/podtemplates?limit=500&resourceVersion=0
I0904 20:15:17.801613       1 request.go:600] Waited for 183.877858ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpointslice-controller
I0904 20:15:17.813135       1 shared_informer.go:270] caches populated
... skipping 5 lines ...
I0904 20:15:17.817900       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="28.407955ms"
I0904 20:15:17.818069       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:15:17.818051047 +0000 UTC m=+13.745425878"
I0904 20:15:17.818535       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:15:17 +0000 UTC - now: 2022-09-04 20:15:17.818528877 +0000 UTC m=+13.745903808]
I0904 20:15:17.819127       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0904 20:15:17.819288       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0904 20:15:17.831075       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="14.447593ms"
I0904 20:15:17.837303       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:15:17.837549       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 20:15:17.837492748 +0000 UTC m=+13.764867679"
I0904 20:15:17.837771       1 shared_informer.go:270] caches populated
I0904 20:15:17.837913       1 shared_informer.go:247] Caches are synced for job 
I0904 20:15:17.831356       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="13.292722ms"
I0904 20:15:17.838125       1 request.go:600] Waited for 188.001713ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/clusterrole-aggregation-controller
I0904 20:15:17.838600       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:15:17.844222       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 20:15:17.844200063 +0000 UTC m=+13.771574894"
I0904 20:15:17.845653       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 20:15:17 +0000 UTC - now: 2022-09-04 20:15:17.845646552 +0000 UTC m=+13.773021483]
I0904 20:15:17.845991       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 599s
I0904 20:15:17.846243       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="2.033225ms"
I0904 20:15:17.845925       1 shared_informer.go:270] caches populated
I0904 20:15:17.846522       1 shared_informer.go:247] Caches are synced for TTL after finished 
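The two "Error syncing deployment ... the object has been modified" entries above are ordinary optimistic-concurrency conflicts: the controller's write raced with another update, was rejected on resourceVersion, and the deployment is simply re-queued and synced again a few milliseconds later. A minimal client-go sketch of handling the same kind of conflict from outside the controller (this is not the controller-manager's own code; clientset, namespace, name, and replicas are assumed placeholders):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleDeployment re-reads the Deployment and retries the update whenever the
// API server answers with a Conflict ("the object has been modified").
func scaleDeployment(clientset kubernetes.Interface, namespace, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Get a fresh copy so the update carries the latest resourceVersion.
		d, err := clientset.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Spec.Replicas = &replicas
		_, err = clientset.AppsV1().Deployments(namespace).Update(context.TODO(), d, metav1.UpdateOptions{})
		return err // a Conflict error here makes RetryOnConflict try again
	})
}
```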
... skipping 601 lines ...
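The request.go "Waited for ... due to client-side throttling, not priority and fairness" messages in the excerpt above come from client-go's built-in rate limiter, not from the API server. A minimal sketch of where those limits live on a rest.Config (the values here are illustrative, not what kube-controller-manager actually uses):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClient builds a clientset whose QPS/Burst drive the client-side rate
// limiter that produces the "Waited for ... due to client-side throttling"
// log lines when requests exceed the configured rate.
func newClient(kubeconfigPath string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // steady-state requests per second
	cfg.Burst = 100 // short bursts allowed above QPS
	return kubernetes.NewForConfig(cfg)
}
```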
I0904 20:15:51.376693       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:15:51.376701       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 20:15:51.376707       1 controller.go:731] It took 1.56e-05 seconds to finish nodeSyncInternal
I0904 20:15:51.376776       1 controller_utils.go:209] Added [] Taint to Node capz-7sh698-control-plane-sh4jc
I0904 20:15:51.398450       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-control-plane-sh4jc"
I0904 20:15:51.398971       1 controller_utils.go:221] Made sure that Node capz-7sh698-control-plane-sh4jc has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0904 20:15:52.674574       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-7sh698-control-plane-sh4jc transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:15:31 +0000 UTC,LastTransitionTime:2022-09-04 20:14:51 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:15:51 +0000 UTC,LastTransitionTime:2022-09-04 20:15:51 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:15:52.674668       1 node_lifecycle_controller.go:1047] Node capz-7sh698-control-plane-sh4jc ReadyCondition updated. Updating timestamp.
I0904 20:15:52.674691       1 node_lifecycle_controller.go:893] Node capz-7sh698-control-plane-sh4jc is healthy again, removing all taints
I0904 20:15:52.674710       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 20:15:55.615063       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/coredns-558bd4d5db-9q77m"
I0904 20:15:55.616090       1 disruption.go:427] updatePod called on pod "coredns-558bd4d5db-9q77m"
I0904 20:15:55.616298       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-558bd4d5db-9q77m, PodDisruptionBudget controller will avoid syncing.
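The node_lifecycle_controller lines above show the control-plane node flipping its Ready condition from False (cni plugin not initialized) to True, after which the not-ready taint is removed and the controller exits master disruption mode. A small client-go sketch that reports the same Ready condition for every node (an illustrative helper, not part of this job's tooling):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeReadiness lists all nodes and prints their Ready condition,
// mirroring what the node lifecycle controller evaluates above.
func printNodeReadiness(clientset kubernetes.Interface) error {
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, node := range nodes.Items {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s reason=%s\n", node.Name, cond.Status, cond.Reason)
			}
		}
	}
	return nil
}
```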
... skipping 249 lines ...
I0904 20:17:08.841244       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 20:17:08.841326       1 controller.go:731] It took 0.000177302 seconds to finish nodeSyncInternal
I0904 20:17:08.842056       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd5fb82eabc6f4, ext:24710384455, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:08.842818       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd5fd1323c4684, ext:124770185843, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:08.842849       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-7sh698-mp-0000001], creating 1
I0904 20:17:08.848224       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
W0904 20:17:08.848252       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7sh698-mp-0000001" does not exist
I0904 20:17:08.849629       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fbcd66ad921, ext:43303476084, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:08.849736       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fd132a5e20b, ext:124777107038, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:08.849786       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-7sh698-mp-0000001], creating 1
I0904 20:17:08.862289       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:17:08.864323       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-7sh698-mp-0000001" new_ttl="0s"
I0904 20:17:08.874006       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/kube-proxy-qwcht"
... skipping 131 lines ...
I0904 20:17:13.581906       1 controller.go:272] Triggering nodeSync
I0904 20:17:13.581927       1 controller.go:291] nodeSync has been triggered
I0904 20:17:13.581934       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 20:17:13.581942       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 20:17:13.582291       1 controller.go:731] It took 1.4501e-05 seconds to finish nodeSyncInternal
I0904 20:17:13.580310       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
W0904 20:17:13.582345       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-7sh698-mp-0000000" does not exist
I0904 20:17:13.581363       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd5fd1ebaa6758, ext:127659962695, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:13.582557       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd5fd262b90072, ext:129509924449, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:13.582600       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-7sh698-mp-0000000], creating 1
I0904 20:17:13.583286       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fd1d406423f, ext:127263329326, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:13.588310       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fd26310cf3e, ext:129515679121, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:13.588475       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-7sh698-mp-0000000], creating 1
... skipping 344 lines ...
I0904 20:17:39.239638       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fd8ce3fce14, ext:155166437479, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:39.239895       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd5fd8ce4c7146, ext:155167265689, loc:(*time.Location)(0x731ea80)}}
I0904 20:17:39.240101       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0904 20:17:39.240374       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0904 20:17:39.240536       1 daemon_controller.go:1103] Updating daemon set status
I0904 20:17:39.240783       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (3.486144ms)
I0904 20:17:42.693967       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-7sh698-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:17:18 +0000 UTC,LastTransitionTime:2022-09-04 20:17:08 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:17:38 +0000 UTC,LastTransitionTime:2022-09-04 20:17:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:17:42.694054       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:17:42.706863       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-7sh698-mp-0000001}
I0904 20:17:42.707064       1 taint_manager.go:440] "Updating known taints on node" node="capz-7sh698-mp-0000001" taints=[]
I0904 20:17:42.707183       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-7sh698-mp-0000001"
I0904 20:17:42.707366       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:17:42.708315       1 node_lifecycle_controller.go:893] Node capz-7sh698-mp-0000001 is healthy again, removing all taints
... skipping 55 lines ...
I0904 20:17:46.452948       1 daemon_controller.go:1103] Updating daemon set status
I0904 20:17:46.453042       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (1.814422ms)
I0904 20:17:47.363212       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="79.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:45788" resp=200
I0904 20:17:47.591567       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:17:47.592646       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:17:47.670115       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:17:47.709075       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-7sh698-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 20:17:23 +0000 UTC,LastTransitionTime:2022-09-04 20:17:13 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 20:17:43 +0000 UTC,LastTransitionTime:2022-09-04 20:17:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 20:17:47.709135       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:17:47.728255       1 node_lifecycle_controller.go:893] Node capz-7sh698-mp-0000000 is healthy again, removing all taints
I0904 20:17:47.729082       1 node_lifecycle_controller.go:1214] Controller detected that zone canadacentral::0 is now in state Normal.
I0904 20:17:47.729594       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
I0904 20:17:47.729920       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-7sh698-mp-0000000}
I0904 20:17:47.730086       1 taint_manager.go:440] "Updating known taints on node" node="capz-7sh698-mp-0000000" taints=[]
... skipping 363 lines ...
I0904 20:21:31.015595       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:21:31.015602       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:21:31.015616       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:21:31.015626       1 pv_controller.go:1764] operation "delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]" is already running, skipping
I0904 20:21:31.017304       1 pv_controller.go:1341] isVolumeReleased[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is released
I0904 20:21:31.017325       1 pv_controller.go:1405] doDeleteVolume [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]
I0904 20:21:31.063374       1 pv_controller.go:1260] deletion of volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
I0904 20:21:31.063402       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: set phase Failed
I0904 20:21:31.063414       1 pv_controller.go:858] updating PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: set phase Failed
I0904 20:21:31.067176       1 pv_protection_controller.go:205] Got event on PV pvc-7d744a11-53a2-496c-b906-1636e0e14dbc
I0904 20:21:31.067289       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1337
I0904 20:21:31.067875       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase: Failed, bound to: "azuredisk-8081/pvc-pjmjm (uid: 7d744a11-53a2-496c-b906-1636e0e14dbc)", boundByController: true
I0904 20:21:31.067963       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is bound to claim azuredisk-8081/pvc-pjmjm
I0904 20:21:31.068021       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:21:31.068034       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:21:31.068056       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:21:31.068098       1 pv_controller.go:1764] operation "delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]" is already running, skipping
I0904 20:21:31.068526       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1337
I0904 20:21:31.068550       1 pv_controller.go:879] volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" entered phase "Failed"
I0904 20:21:31.068562       1 pv_controller.go:901] volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
E0904 20:21:31.068643       1 goroutinemap.go:150] Operation for "delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]" failed. No retries permitted until 2022-09-04 20:21:31.56859696 +0000 UTC m=+387.495971891 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:21:31.068918       1 event.go:291] "Event occurred" object="pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:21:31.696790       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 12 items received
I0904 20:21:32.602407       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:21:32.678848       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:21:32.678928       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1337
I0904 20:21:32.678971       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase: Failed, bound to: "azuredisk-8081/pvc-pjmjm (uid: 7d744a11-53a2-496c-b906-1636e0e14dbc)", boundByController: true
I0904 20:21:32.679012       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is bound to claim azuredisk-8081/pvc-pjmjm
I0904 20:21:32.679030       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:21:32.679038       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:21:32.679055       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:21:32.679095       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc] started
I0904 20:21:32.682263       1 pv_controller.go:1341] isVolumeReleased[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is released
I0904 20:21:32.682282       1 pv_controller.go:1405] doDeleteVolume [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]
I0904 20:21:32.720273       1 pv_controller.go:1260] deletion of volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
I0904 20:21:32.720308       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: set phase Failed
I0904 20:21:32.720319       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase Failed already set
E0904 20:21:32.720624       1 goroutinemap.go:150] Operation for "delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]" failed. No retries permitted until 2022-09-04 20:21:33.720337848 +0000 UTC m=+389.647712779 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:21:33.898619       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
I0904 20:21:33.898652       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc to the node "capz-7sh698-mp-0000000" mounted false
I0904 20:21:33.932354       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-7sh698-mp-0000000" succeeded. VolumesAttached: []
I0904 20:21:33.932745       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc") on node "capz-7sh698-mp-0000000" 
I0904 20:21:33.935309       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
I0904 20:21:33.935699       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc to the node "capz-7sh698-mp-0000000" mounted false
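Each failed deleteVolumeOperation above is parked by goroutinemap with an exponential backoff: no retries permitted for 500ms, then 1s (and 2s further down) before the PV controller attempts doDeleteVolume again. The same backoff shape can be expressed with apimachinery's wait helpers; a minimal sketch, with tryDeleteVolume standing in as a hypothetical placeholder for the delete attempt:

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryDelete retries a delete attempt with the doubling delays seen in the
// goroutinemap log lines above (500ms, 1s, 2s, ...).
func retryDelete(tryDeleteVolume func() error) error {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first durationBeforeRetry
		Factor:   2.0,                    // double after every failure
		Steps:    5,                      // give up after a handful of attempts
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := tryDeleteVolume(); err != nil {
			return false, nil // not done yet; back off and retry
		}
		return true, nil
	})
}
```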
... skipping 13 lines ...
I0904 20:21:45.555969       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 4 items received
I0904 20:21:47.368502       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="98µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:60882" resp=200
I0904 20:21:47.597543       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:21:47.602669       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:21:47.679625       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:21:47.679726       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1337
I0904 20:21:47.679793       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase: Failed, bound to: "azuredisk-8081/pvc-pjmjm (uid: 7d744a11-53a2-496c-b906-1636e0e14dbc)", boundByController: true
I0904 20:21:47.679832       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is bound to claim azuredisk-8081/pvc-pjmjm
I0904 20:21:47.679853       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:21:47.679863       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:21:47.679881       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:21:47.679930       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc] started
I0904 20:21:47.689871       1 pv_controller.go:1341] isVolumeReleased[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is released
I0904 20:21:47.689899       1 pv_controller.go:1405] doDeleteVolume [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]
I0904 20:21:47.689939       1 pv_controller.go:1260] deletion of volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) since it's in attaching or detaching state
I0904 20:21:47.689961       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: set phase Failed
I0904 20:21:47.689983       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase Failed already set
E0904 20:21:47.690016       1 goroutinemap.go:150] Operation for "delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]" failed. No retries permitted until 2022-09-04 20:21:49.689991812 +0000 UTC m=+405.617366743 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) since it's in attaching or detaching state"
I0904 20:21:47.770347       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:21:48.173044       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:21:49.339892       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) returned with <nil>
I0904 20:21:49.339959       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc) succeeded
I0904 20:21:49.339971       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc was detached from node:capz-7sh698-mp-0000000
I0904 20:21:49.340197       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc") on node "capz-7sh698-mp-0000000" 
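The sequence above shows why the delete kept failing: the managed disk cannot be removed while it is still attached (or mid-detach) on the VMSS instance, so the attach/detach controller has to finish DetachVolume.Detach before the PV controller's next delete attempt can succeed. A minimal sketch of that ordering using apimachinery's polling helper; diskIsAttached and deleteManagedDisk are hypothetical placeholders, not cloud-provider functions from this log:

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteWhenDetached waits until the disk is no longer attached to any node
// and only then issues the delete, matching the ordering visible in the log.
func deleteWhenDetached(ctx context.Context, diskURI string,
	diskIsAttached func(context.Context, string) (bool, error),
	deleteManagedDisk func(context.Context, string) error) error {

	err := wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		attached, err := diskIsAttached(ctx, diskURI)
		if err != nil {
			return false, err
		}
		return !attached, nil // done once the disk is fully detached
	})
	if err != nil {
		return err
	}
	return deleteManagedDisk(ctx, diskURI)
}
```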
... skipping 11 lines ...
I0904 20:21:57.703800       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:22:00.326096       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:22:00.578477       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0904 20:22:02.603608       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:22:02.679901       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:22:02.679965       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1337
I0904 20:22:02.680004       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase: Failed, bound to: "azuredisk-8081/pvc-pjmjm (uid: 7d744a11-53a2-496c-b906-1636e0e14dbc)", boundByController: true
I0904 20:22:02.680045       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is bound to claim azuredisk-8081/pvc-pjmjm
I0904 20:22:02.680062       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:22:02.680070       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:22:02.680088       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:22:02.680116       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc] started
I0904 20:22:02.683924       1 pv_controller.go:1341] isVolumeReleased[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is released
... skipping 2 lines ...
I0904 20:22:07.921194       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc
I0904 20:22:07.921238       1 pv_controller.go:1436] volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" deleted
I0904 20:22:07.921441       1 pv_controller.go:1284] deleteVolumeOperation [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: success
I0904 20:22:07.933455       1 pv_protection_controller.go:205] Got event on PV pvc-7d744a11-53a2-496c-b906-1636e0e14dbc
I0904 20:22:07.933723       1 pv_protection_controller.go:125] Processing PV pvc-7d744a11-53a2-496c-b906-1636e0e14dbc
I0904 20:22:07.933897       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" with version 1394
I0904 20:22:07.934464       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: phase: Failed, bound to: "azuredisk-8081/pvc-pjmjm (uid: 7d744a11-53a2-496c-b906-1636e0e14dbc)", boundByController: true
I0904 20:22:07.934612       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: volume is bound to claim azuredisk-8081/pvc-pjmjm
I0904 20:22:07.934718       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: claim azuredisk-8081/pvc-pjmjm not found
I0904 20:22:07.934825       1 pv_controller.go:1108] reclaimVolume[pvc-7d744a11-53a2-496c-b906-1636e0e14dbc]: policy is Delete
I0904 20:22:07.934921       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7d744a11-53a2-496c-b906-1636e0e14dbc[3772205a-cb20-42ac-a069-408d9e1ad69a]]
I0904 20:22:07.935035       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7d744a11-53a2-496c-b906-1636e0e14dbc] started
I0904 20:22:07.937918       1 pv_controller.go:1244] Volume "pvc-7d744a11-53a2-496c-b906-1636e0e14dbc" is already being deleted
... skipping 109 lines ...
I0904 20:22:16.391721       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981") from node "capz-7sh698-mp-0000001" 
I0904 20:22:16.391840       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" to node "capz-7sh698-mp-0000001".
I0904 20:22:16.438832       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" lun 0 to node "capz-7sh698-mp-0000001".
I0904 20:22:16.439082       1 azure_controller_vmss.go:101] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - attach disk(capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) with DiskEncryptionSetID()
I0904 20:22:16.463984       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0904 20:22:16.488544       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-8vz6q, uid c071e816-647e-4d60-9aed-ed8c4fd3fede, event type delete
E0904 20:22:16.504638       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-r4qjq" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0904 20:22:16.530387       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1d76308e72f, uid ccbaa48b-68f5-4212-8bde-81d67a811da2, event type delete
I0904 20:22:16.534407       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1d9f19914b0, uid 5a77d651-4171-4162-a6fe-c1a6237600dd, event type delete
I0904 20:22:16.540285       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1dbe254c576, uid 88f23906-6e70-4430-9704-7280ac1f440d, event type delete
I0904 20:22:16.542090       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1dc3c5d0446, uid 33ed0841-053c-4681-996a-e1d2439a56f9, event type delete
I0904 20:22:16.544864       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1dc42fa992d, uid 0a1aaff2-2f12-4677-918d-ff67ea9b88b5, event type delete
I0904 20:22:16.547434       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-cshgg.1711c1dc49553570, uid 5da44915-8e83-4f53-a278-78f56fc81f38, event type delete
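The tokens_controller error above is a benign race: the azuredisk-8081 namespace is already terminating, so the API server refuses to create a replacement token secret in it. A small client-go sketch that checks for this state before creating objects in a namespace (illustrative only; clientset and namespace are assumed placeholders):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureNamespaceActive returns an error if the namespace is being deleted,
// the condition that makes creates fail with "unable to create new content".
func ensureNamespaceActive(clientset kubernetes.Interface, namespace string) error {
	ns, err := clientset.CoreV1().Namespaces().Get(context.TODO(), namespace, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if ns.Status.Phase == corev1.NamespaceTerminating {
		return fmt.Errorf("namespace %s is terminating; skip creating new objects", namespace)
	}
	return nil
}
```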
... skipping 6 lines ...
I0904 20:22:16.643958       1 publisher.go:181] Finished syncing namespace "azuredisk-8081" (3.841326ms)
I0904 20:22:16.649609       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8081, estimate: 0, errors: <nil>
I0904 20:22:16.649690       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (1.9µs)
I0904 20:22:16.657309       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (196.027059ms)
I0904 20:22:17.101039       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0904 20:22:17.124752       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-ll5gd, uid f8b304e2-5acb-48b2-95ad-30017d7ad355, event type delete
E0904 20:22:17.138993       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-qlm8t" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0904 20:22:17.152432       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0904 20:22:17.152701       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.5µs)
I0904 20:22:17.153136       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid 4e254213-e0e9-488a-8258-235555ab8c8d, event type delete
I0904 20:22:17.231611       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid b9747cf2-4640-4a0f-884b-8ef6ed230a7e, event type delete
I0904 20:22:17.233968       1 publisher.go:181] Finished syncing namespace "azuredisk-2540" (2.467517ms)
I0904 20:22:17.237708       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
... skipping 163 lines ...
I0904 20:22:39.215880       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:22:39.215956       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:22:39.216029       1 pv_controller.go:1764] operation "delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]" is already running, skipping
I0904 20:22:39.215460       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981] started
I0904 20:22:39.218928       1 pv_controller.go:1341] isVolumeReleased[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is released
I0904 20:22:39.218944       1 pv_controller.go:1405] doDeleteVolume [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]
I0904 20:22:39.269397       1 pv_controller.go:1260] deletion of volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:22:39.269426       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: set phase Failed
I0904 20:22:39.269435       1 pv_controller.go:858] updating PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: set phase Failed
I0904 20:22:39.273544       1 pv_protection_controller.go:205] Got event on PV pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981
I0904 20:22:39.273801       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1508
I0904 20:22:39.273962       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase: Failed, bound to: "azuredisk-5466/pvc-8n97s (uid: fd484ad7-b7f0-41f6-8275-1bcb99060981)", boundByController: true
I0904 20:22:39.274102       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is bound to claim azuredisk-5466/pvc-8n97s
I0904 20:22:39.274241       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: claim azuredisk-5466/pvc-8n97s not found
I0904 20:22:39.274382       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:22:39.274509       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:22:39.274605       1 pv_controller.go:1764] operation "delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]" is already running, skipping
I0904 20:22:39.275107       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1508
I0904 20:22:39.275132       1 pv_controller.go:879] volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" entered phase "Failed"
I0904 20:22:39.275141       1 pv_controller.go:901] volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
E0904 20:22:39.275229       1 goroutinemap.go:150] Operation for "delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]" failed. No retries permitted until 2022-09-04 20:22:39.775169164 +0000 UTC m=+455.702543995 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:22:39.275321       1 event.go:291] "Event occurred" object="pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:22:41.588628       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0904 20:22:41.695871       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:22:43.402886       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:22:46.857554       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0904 20:22:47.363268       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="81.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:45776" resp=200
I0904 20:22:47.598821       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:22:47.605116       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:22:47.681692       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:22:47.681794       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1508
I0904 20:22:47.681865       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase: Failed, bound to: "azuredisk-5466/pvc-8n97s (uid: fd484ad7-b7f0-41f6-8275-1bcb99060981)", boundByController: true
I0904 20:22:47.681915       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is bound to claim azuredisk-5466/pvc-8n97s
I0904 20:22:47.682002       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: claim azuredisk-5466/pvc-8n97s not found
I0904 20:22:47.682047       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:22:47.682071       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:22:47.682130       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981] started
I0904 20:22:47.693275       1 pv_controller.go:1341] isVolumeReleased[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is released
I0904 20:22:47.693300       1 pv_controller.go:1405] doDeleteVolume [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]
I0904 20:22:47.748589       1 pv_controller.go:1260] deletion of volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:22:47.748621       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: set phase Failed
I0904 20:22:47.748631       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase Failed already set
E0904 20:22:47.748695       1 goroutinemap.go:150] Operation for "delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]" failed. No retries permitted until 2022-09-04 20:22:48.748639691 +0000 UTC m=+464.676014622 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:22:48.202100       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:22:48.705530       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 12 items received
I0904 20:22:49.194755       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:22:49.194798       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:22:49.207098       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-7sh698-mp-0000001" succeeded. VolumesAttached: []
I0904 20:22:49.207287       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981") on node "capz-7sh698-mp-0000001" 
... skipping 11 lines ...
I0904 20:22:57.705154       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:23:02.382695       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0904 20:23:02.587686       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 0 items received
I0904 20:23:02.605358       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:23:02.682226       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:23:02.682490       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1508
I0904 20:23:02.682566       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase: Failed, bound to: "azuredisk-5466/pvc-8n97s (uid: fd484ad7-b7f0-41f6-8275-1bcb99060981)", boundByController: true
I0904 20:23:02.682625       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is bound to claim azuredisk-5466/pvc-8n97s
I0904 20:23:02.682652       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: claim azuredisk-5466/pvc-8n97s not found
I0904 20:23:02.682676       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:23:02.682699       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:23:02.682775       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981] started
I0904 20:23:02.686856       1 pv_controller.go:1341] isVolumeReleased[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is released
I0904 20:23:02.686878       1 pv_controller.go:1405] doDeleteVolume [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]
I0904 20:23:02.686946       1 pv_controller.go:1260] deletion of volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) since it's in attaching or detaching state
I0904 20:23:02.686964       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: set phase Failed
I0904 20:23:02.686977       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase Failed already set
E0904 20:23:02.687037       1 goroutinemap.go:150] Operation for "delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]" failed. No retries permitted until 2022-09-04 20:23:04.687012059 +0000 UTC m=+480.614386990 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) since it's in attaching or detaching state"
I0904 20:23:04.652313       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) returned with <nil>
I0904 20:23:04.652372       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981) succeeded
I0904 20:23:04.652382       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981 was detached from node:capz-7sh698-mp-0000001
I0904 20:23:04.652456       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981") on node "capz-7sh698-mp-0000001" 
I0904 20:23:07.362987       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46580" resp=200
I0904 20:23:10.949888       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0904 20:23:13.624672       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 0 items received
I0904 20:23:15.583801       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0904 20:23:17.362848       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.5µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:51552" resp=200
I0904 20:23:17.599263       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:23:17.605434       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:23:17.682423       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:23:17.682521       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1508
I0904 20:23:17.682596       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase: Failed, bound to: "azuredisk-5466/pvc-8n97s (uid: fd484ad7-b7f0-41f6-8275-1bcb99060981)", boundByController: true
I0904 20:23:17.682672       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is bound to claim azuredisk-5466/pvc-8n97s
I0904 20:23:17.682721       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: claim azuredisk-5466/pvc-8n97s not found
I0904 20:23:17.682738       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:23:17.682773       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:23:17.682842       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981] started
I0904 20:23:17.694109       1 pv_controller.go:1341] isVolumeReleased[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is released
... skipping 4 lines ...
I0904 20:23:23.584293       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981
I0904 20:23:23.584410       1 pv_controller.go:1436] volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" deleted
I0904 20:23:23.584462       1 pv_controller.go:1284] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: success
I0904 20:23:23.594398       1 pv_protection_controller.go:205] Got event on PV pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981
I0904 20:23:23.594429       1 pv_protection_controller.go:125] Processing PV pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981
I0904 20:23:23.594836       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" with version 1574
I0904 20:23:23.594876       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: phase: Failed, bound to: "azuredisk-5466/pvc-8n97s (uid: fd484ad7-b7f0-41f6-8275-1bcb99060981)", boundByController: true
I0904 20:23:23.595013       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: volume is bound to claim azuredisk-5466/pvc-8n97s
I0904 20:23:23.595039       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: claim azuredisk-5466/pvc-8n97s not found
I0904 20:23:23.595214       1 pv_controller.go:1108] reclaimVolume[pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981]: policy is Delete
I0904 20:23:23.595236       1 pv_controller.go:1753] scheduleOperation[delete-pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981[92fd060e-6437-4e25-8ccf-9b2ca31c2c92]]
I0904 20:23:23.595268       1 pv_controller.go:1232] deleteVolumeOperation [pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981] started
I0904 20:23:23.599034       1 pv_controller.go:1244] Volume "pvc-fd484ad7-b7f0-41f6-8275-1bcb99060981" is already being deleted
... skipping 105 lines ...
I0904 20:23:29.836739       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-7sh698-mp-0000001, refreshing the cache
I0904 20:23:29.898459       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" to node "capz-7sh698-mp-0000001".
I0904 20:23:29.945229       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" lun 0 to node "capz-7sh698-mp-0000001".
I0904 20:23:29.945613       1 azure_controller_vmss.go:101] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - attach disk(capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) with DiskEncryptionSetID()
I0904 20:23:31.141365       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0904 20:23:31.162281       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-m2tkl, uid 35258a99-c601-4a47-be77-aa8b17675a11, event type delete
E0904 20:23:31.180742       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-sq64k" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0904 20:23:31.190801       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5466, name kube-root-ca.crt, uid 680f7117-97b6-4288-8715-9b1522d03e0e, event type delete
I0904 20:23:31.193846       1 publisher.go:181] Finished syncing namespace "azuredisk-5466" (3.242624ms)
I0904 20:23:31.217037       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5466/default), service account deleted, removing tokens
I0904 20:23:31.217283       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (3.2µs)
I0904 20:23:31.217308       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5466, name default, uid 327599dc-3e43-4baa-afb2-117e7a5b9246, event type delete
I0904 20:23:31.237400       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5466, name azuredisk-volume-tester-zzskn.1711c1e73c105c69, uid 25b5f389-f69f-46bf-a512-ef6f25f65879, event type delete
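The tokens controller error above is the expected rejection when a service-account token secret would be recreated in a namespace that is already terminating. A tiny illustrative helper (not client-go's API) that recognizes that condition from the error text so the caller can stop retrying during teardown:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // isNamespaceTerminating is a hypothetical helper that spots the
    // "unable to create new content ... because it is being terminated" rejection,
    // which is expected while the namespace controller tears a namespace down.
    func isNamespaceTerminating(err error) bool {
        return err != nil && strings.Contains(err.Error(), "because it is being terminated")
    }

    func main() {
        err := errors.New(`secrets "default-token-sq64k" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated`)
        fmt.Println(isNamespaceTerminating(err)) // true: do not keep recreating the token secret
    }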
... skipping 164 lines ...
I0904 20:23:51.411125       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: claim azuredisk-2790/pvc-9mkxc not found
I0904 20:23:51.411211       1 pv_controller.go:1108] reclaimVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: policy is Delete
I0904 20:23:51.411232       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]]
I0904 20:23:51.411239       1 pv_controller.go:1764] operation "delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]" is already running, skipping
I0904 20:23:51.412944       1 pv_controller.go:1341] isVolumeReleased[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is released
I0904 20:23:51.412959       1 pv_controller.go:1405] doDeleteVolume [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]
I0904 20:23:51.443102       1 pv_controller.go:1260] deletion of volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:23:51.443132       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: set phase Failed
I0904 20:23:51.443141       1 pv_controller.go:858] updating PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: set phase Failed
I0904 20:23:51.456201       1 pv_protection_controller.go:205] Got event on PV pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077
I0904 20:23:51.456415       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" with version 1673
I0904 20:23:51.456584       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: phase: Failed, bound to: "azuredisk-2790/pvc-9mkxc (uid: f6e3cb28-d617-4dd6-a883-74cb696e7077)", boundByController: true
I0904 20:23:51.456738       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is bound to claim azuredisk-2790/pvc-9mkxc
I0904 20:23:51.456893       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: claim azuredisk-2790/pvc-9mkxc not found
I0904 20:23:51.456978       1 pv_controller.go:1108] reclaimVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: policy is Delete
I0904 20:23:51.457116       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]]
I0904 20:23:51.457247       1 pv_controller.go:1764] operation "delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]" is already running, skipping
I0904 20:23:51.457881       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" with version 1673
I0904 20:23:51.457908       1 pv_controller.go:879] volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" entered phase "Failed"
I0904 20:23:51.457917       1 pv_controller.go:901] volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
E0904 20:23:51.458053       1 goroutinemap.go:150] Operation for "delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]" failed. No retries permitted until 2022-09-04 20:23:51.958020478 +0000 UTC m=+527.885395309 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:23:51.458231       1 event.go:291] "Event occurred" object="pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
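When doDeleteVolume fails because the disk is still attached, the operation is parked with a 500ms durationBeforeRetry; the same operation fails again later in this log with 1s, i.e. the delay doubles per failure. A minimal sketch of that exponential-backoff bookkeeping, assuming a hypothetical backoff type rather than the controller's internal state:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff tracks, per operation, when the next retry is allowed: the first
    // failure defers the retry by an initial delay and each subsequent failure
    // doubles it up to a cap.
    type backoff struct {
        initial, max time.Duration
        current      time.Duration
        notBefore    time.Time
    }

    func (b *backoff) fail(now time.Time) {
        if b.current == 0 {
            b.current = b.initial
        } else if b.current < b.max {
            b.current *= 2
            if b.current > b.max {
                b.current = b.max
            }
        }
        b.notBefore = now.Add(b.current)
    }

    func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

    func main() {
        b := &backoff{initial: 500 * time.Millisecond, max: 2 * time.Minute}
        now := time.Now()
        b.fail(now)                 // first failure: retry no sooner than 500ms later
        fmt.Println(b.allowed(now)) // false
        fmt.Println(b.current)      // 500ms
        b.fail(now.Add(600 * time.Millisecond)) // second failure: delay doubles
        fmt.Println(b.current)      // 1s, matching the later retry in this log
    }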
I0904 20:23:54.320525       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:23:57.362194       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.001µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:55154" resp=200
I0904 20:23:57.706702       1 gc_controller.go:161] GC'ing orphaned
I0904 20:23:57.706743       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:23:59.274451       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
... skipping 8 lines ...
I0904 20:23:59.454640       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077"
I0904 20:23:59.454839       1 azure_controller_vmss.go:175] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077)
I0904 20:23:59.673850       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:24:02.607954       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:24:02.685013       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:24:02.685094       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" with version 1673
I0904 20:24:02.685157       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: phase: Failed, bound to: "azuredisk-2790/pvc-9mkxc (uid: f6e3cb28-d617-4dd6-a883-74cb696e7077)", boundByController: true
I0904 20:24:02.685192       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is bound to claim azuredisk-2790/pvc-9mkxc
I0904 20:24:02.685216       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: claim azuredisk-2790/pvc-9mkxc not found
I0904 20:24:02.685229       1 pv_controller.go:1108] reclaimVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: policy is Delete
I0904 20:24:02.685246       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]]
I0904 20:24:02.685291       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077] started
I0904 20:24:02.688175       1 pv_controller.go:1341] isVolumeReleased[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is released
I0904 20:24:02.688197       1 pv_controller.go:1405] doDeleteVolume [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]
I0904 20:24:02.688348       1 pv_controller.go:1260] deletion of volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) since it's in attaching or detaching state
I0904 20:24:02.688367       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: set phase Failed
I0904 20:24:02.688377       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: phase Failed already set
E0904 20:24:02.688485       1 goroutinemap.go:150] Operation for "delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]" failed. No retries permitted until 2022-09-04 20:24:03.688456794 +0000 UTC m=+539.615831625 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077) since it's in attaching or detaching state"
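The retry at 20:24:02 fails differently: by now the detach has been issued, so the disk is reported as being in an attaching or detaching state rather than attached. Both outcomes are transient and just mean "retry later"; a hypothetical classifier that tells the two messages apart, purely for illustration:

    package main

    import (
        "fmt"
        "strings"
    )

    // classifyDeleteError distinguishes the two retryable failures seen in this
    // log while a dynamically provisioned disk is torn down.
    func classifyDeleteError(msg string) string {
        switch {
        case strings.Contains(msg, "already attached to node"):
            return "still attached: wait for detach, then retry delete"
        case strings.Contains(msg, "attaching or detaching state"):
            return "disk state is transient: retry delete shortly"
        default:
            return "unexpected error"
        }
    }

    func main() {
        fmt.Println(classifyDeleteError("disk(...) already attached to node(...), could not be deleted"))
        fmt.Println(classifyDeleteError("failed to delete disk(...) since it's in attaching or detaching state"))
    }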
I0904 20:24:02.793493       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:24:07.363041       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.8µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48904" resp=200
I0904 20:24:09.234246       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:24:09.234279       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077 to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:24:12.578309       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 26 items received
I0904 20:24:12.796213       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
... skipping 4 lines ...
I0904 20:24:17.323400       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:24:17.362425       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="71.5µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:44834" resp=200
I0904 20:24:17.600640       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:24:17.608788       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:24:17.686126       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:24:17.686402       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" with version 1673
I0904 20:24:17.686533       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: phase: Failed, bound to: "azuredisk-2790/pvc-9mkxc (uid: f6e3cb28-d617-4dd6-a883-74cb696e7077)", boundByController: true
I0904 20:24:17.686617       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is bound to claim azuredisk-2790/pvc-9mkxc
I0904 20:24:17.686643       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: claim azuredisk-2790/pvc-9mkxc not found
I0904 20:24:17.686658       1 pv_controller.go:1108] reclaimVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: policy is Delete
I0904 20:24:17.686676       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]]
I0904 20:24:17.686714       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077] started
I0904 20:24:17.698787       1 pv_controller.go:1341] isVolumeReleased[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is released
... skipping 4 lines ...
I0904 20:24:22.951827       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077
I0904 20:24:22.951864       1 pv_controller.go:1436] volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" deleted
I0904 20:24:22.951877       1 pv_controller.go:1284] deleteVolumeOperation [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: success
I0904 20:24:22.957252       1 pv_protection_controller.go:205] Got event on PV pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077
I0904 20:24:22.957377       1 pv_protection_controller.go:125] Processing PV pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077
I0904 20:24:22.957279       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" with version 1722
I0904 20:24:22.957487       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: phase: Failed, bound to: "azuredisk-2790/pvc-9mkxc (uid: f6e3cb28-d617-4dd6-a883-74cb696e7077)", boundByController: true
I0904 20:24:22.957518       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: volume is bound to claim azuredisk-2790/pvc-9mkxc
I0904 20:24:22.957534       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: claim azuredisk-2790/pvc-9mkxc not found
I0904 20:24:22.957542       1 pv_controller.go:1108] reclaimVolume[pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077]: policy is Delete
I0904 20:24:22.957558       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077[7181b086-f813-42fa-ba35-c3e76feb0bc5]]
I0904 20:24:22.957619       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077] started
I0904 20:24:22.960300       1 pv_controller.go:1244] Volume "pvc-f6e3cb28-d617-4dd6-a883-74cb696e7077" is already being deleted
... skipping 108 lines ...
I0904 20:24:31.595487       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" lun 0 to node "capz-7sh698-mp-0000000".
I0904 20:24:31.595535       1 azure_controller_vmss.go:101] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000000) - attach disk(capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) with DiskEncryptionSetID()
I0904 20:24:31.836328       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0904 20:24:31.855316       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2790, name kube-root-ca.crt, uid c620d5b4-e024-4ead-802a-92844714d551, event type delete
I0904 20:24:31.857968       1 publisher.go:181] Finished syncing namespace "azuredisk-2790" (2.853819ms)
I0904 20:24:31.861331       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-5h98v, uid 56a8566a-5631-4313-b995-c6b702570b47, event type delete
E0904 20:24:31.874501       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-tkn52" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0904 20:24:31.875753       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1f8532360c9, uid 8905772d-f519-4486-858f-9b7c7cebbc98, event type delete
I0904 20:24:31.879014       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1f99b5523bd, uid fed21118-d6ff-4420-8311-fab1ef620cb9, event type delete
I0904 20:24:31.884279       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1fc733e2f79, uid 189e0b43-34bb-4660-9cb6-e16a459a297c, event type delete
I0904 20:24:31.886666       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1fc733eac16, uid 60a3ec74-ed4a-47b9-b1b6-3f43b06a26b1, event type delete
I0904 20:24:31.889304       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1fc97492ef0, uid 52f1c1cc-39a5-46ff-afd7-9f161f9b9aa2, event type delete
I0904 20:24:31.892292       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-c98mc.1711c1fcf63c0776, uid ab434b80-5e0f-432b-88bc-a84097d7c17f, event type delete
... skipping 153 lines ...
I0904 20:24:52.125403       1 pv_controller.go:1108] reclaimVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: policy is Delete
I0904 20:24:52.125215       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd] started
I0904 20:24:52.125642       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]]
I0904 20:24:52.125695       1 pv_controller.go:1764] operation "delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]" is already running, skipping
I0904 20:24:52.128398       1 pv_controller.go:1341] isVolumeReleased[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is released
I0904 20:24:52.128418       1 pv_controller.go:1405] doDeleteVolume [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]
I0904 20:24:52.270544       1 pv_controller.go:1260] deletion of volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
I0904 20:24:52.270570       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: set phase Failed
I0904 20:24:52.270580       1 pv_controller.go:858] updating PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: set phase Failed
I0904 20:24:52.274679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" with version 1820
I0904 20:24:52.274801       1 pv_controller.go:879] volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" entered phase "Failed"
I0904 20:24:52.274896       1 pv_controller.go:901] volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
E0904 20:24:52.275038       1 goroutinemap.go:150] Operation for "delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]" failed. No retries permitted until 2022-09-04 20:24:52.775001715 +0000 UTC m=+588.702376646 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:24:52.275403       1 pv_protection_controller.go:205] Got event on PV pvc-3830149c-1f6b-4eec-886e-f55736bb11cd
I0904 20:24:52.275463       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" with version 1820
I0904 20:24:52.275507       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: phase: Failed, bound to: "azuredisk-5356/pvc-ptkps (uid: 3830149c-1f6b-4eec-886e-f55736bb11cd)", boundByController: true
I0904 20:24:52.275605       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is bound to claim azuredisk-5356/pvc-ptkps
I0904 20:24:52.275693       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: claim azuredisk-5356/pvc-ptkps not found
I0904 20:24:52.275801       1 event.go:291] "Event occurred" object="pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:24:52.275822       1 pv_controller.go:1108] reclaimVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: policy is Delete
I0904 20:24:52.275926       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]]
I0904 20:24:52.276084       1 pv_controller.go:1766] operation "delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]" postponed due to exponential backoff
... skipping 13 lines ...
I0904 20:24:57.708584       1 gc_controller.go:161] GC'ing orphaned
I0904 20:24:57.708617       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:24:57.802324       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 20:25:02.610631       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:25:02.689763       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:25:02.689834       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" with version 1820
I0904 20:25:02.690073       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: phase: Failed, bound to: "azuredisk-5356/pvc-ptkps (uid: 3830149c-1f6b-4eec-886e-f55736bb11cd)", boundByController: true
I0904 20:25:02.690119       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is bound to claim azuredisk-5356/pvc-ptkps
I0904 20:25:02.690142       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: claim azuredisk-5356/pvc-ptkps not found
I0904 20:25:02.690158       1 pv_controller.go:1108] reclaimVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: policy is Delete
I0904 20:25:02.690177       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]]
I0904 20:25:02.690265       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd] started
I0904 20:25:02.693609       1 pv_controller.go:1341] isVolumeReleased[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is released
I0904 20:25:02.693632       1 pv_controller.go:1405] doDeleteVolume [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]
I0904 20:25:02.693670       1 pv_controller.go:1260] deletion of volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) since it's in attaching or detaching state
I0904 20:25:02.693684       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: set phase Failed
I0904 20:25:02.693701       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: phase Failed already set
E0904 20:25:02.693736       1 goroutinemap.go:150] Operation for "delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]" failed. No retries permitted until 2022-09-04 20:25:03.693710728 +0000 UTC m=+599.621085659 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) since it's in attaching or detaching state"
I0904 20:25:04.322512       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:25:05.323090       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:25:07.362478       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="82.6µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46060" resp=200
I0904 20:25:09.909561       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) returned with <nil>
I0904 20:25:09.909614       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd) succeeded
I0904 20:25:09.909652       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd was detached from node:capz-7sh698-mp-0000000
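Here the detach completes: the VMSS update returns <nil>, the common controller reports the detach as succeeded, and only then is the disk considered detached from the node, which is what lets the next delete attempt go through. A small sketch of that ordering, with cloudDetach standing in for the Azure call and attachState for the controller's local bookkeeping:

    package main

    import "fmt"

    // attachState records which disks are believed attached to which node; the
    // record is cleared only after the cloud-side detach has succeeded.
    type attachState struct {
        attached map[string]map[string]bool // node -> disk URI -> attached
    }

    func (s *attachState) detach(node, diskURI string, cloudDetach func() error) error {
        if err := cloudDetach(); err != nil {
            return fmt.Errorf("detach disk %s from %s: %w", diskURI, node, err)
        }
        delete(s.attached[node], diskURI) // update local state only after the cloud call succeeded
        return nil
    }

    func main() {
        s := &attachState{attached: map[string]map[string]bool{
            "capz-7sh698-mp-0000000": {"/subscriptions/.../disks/capz-7sh698-dynamic-pvc-3830149c": true},
        }}
        err := s.detach("capz-7sh698-mp-0000000",
            "/subscriptions/.../disks/capz-7sh698-dynamic-pvc-3830149c",
            func() error { return nil }) // simulate the VMSS update returning <nil>
        fmt.Println(err, len(s.attached["capz-7sh698-mp-0000000"])) // <nil> 0
    }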
... skipping 8 lines ...
I0904 20:25:17.597267       1 controller.go:790] Finished updateLoadBalancerHosts
I0904 20:25:17.597284       1 controller.go:731] It took 0.000180701 seconds to finish nodeSyncInternal
I0904 20:25:17.602189       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:25:17.611353       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:25:17.690262       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:25:17.690338       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" with version 1820
I0904 20:25:17.690410       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: phase: Failed, bound to: "azuredisk-5356/pvc-ptkps (uid: 3830149c-1f6b-4eec-886e-f55736bb11cd)", boundByController: true
I0904 20:25:17.690470       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is bound to claim azuredisk-5356/pvc-ptkps
I0904 20:25:17.690574       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: claim azuredisk-5356/pvc-ptkps not found
I0904 20:25:17.690609       1 pv_controller.go:1108] reclaimVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: policy is Delete
I0904 20:25:17.690656       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]]
I0904 20:25:17.690711       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd] started
I0904 20:25:17.699173       1 pv_controller.go:1341] isVolumeReleased[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is released
... skipping 10 lines ...
I0904 20:25:23.427912       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd
I0904 20:25:23.427958       1 pv_controller.go:1436] volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" deleted
I0904 20:25:23.427973       1 pv_controller.go:1284] deleteVolumeOperation [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: success
I0904 20:25:23.434448       1 pv_protection_controller.go:205] Got event on PV pvc-3830149c-1f6b-4eec-886e-f55736bb11cd
I0904 20:25:23.434483       1 pv_protection_controller.go:125] Processing PV pvc-3830149c-1f6b-4eec-886e-f55736bb11cd
I0904 20:25:23.434784       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" with version 1869
I0904 20:25:23.434818       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: phase: Failed, bound to: "azuredisk-5356/pvc-ptkps (uid: 3830149c-1f6b-4eec-886e-f55736bb11cd)", boundByController: true
I0904 20:25:23.434843       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: volume is bound to claim azuredisk-5356/pvc-ptkps
I0904 20:25:23.434859       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: claim azuredisk-5356/pvc-ptkps not found
I0904 20:25:23.434885       1 pv_controller.go:1108] reclaimVolume[pvc-3830149c-1f6b-4eec-886e-f55736bb11cd]: policy is Delete
I0904 20:25:23.434901       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3830149c-1f6b-4eec-886e-f55736bb11cd[1777f3af-b141-4914-bb40-0ea8350767db]]
I0904 20:25:23.434938       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3830149c-1f6b-4eec-886e-f55736bb11cd] started
I0904 20:25:23.439100       1 pv_controller.go:1244] Volume "pvc-3830149c-1f6b-4eec-886e-f55736bb11cd" is already being deleted
... skipping 881 lines ...
I0904 20:27:09.177785       1 pv_controller.go:1108] reclaimVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: policy is Delete
I0904 20:27:09.177913       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]]
I0904 20:27:09.178041       1 pv_controller.go:1764] operation "delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]" is already running, skipping
I0904 20:27:09.177294       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df] started
I0904 20:27:09.182408       1 pv_controller.go:1341] isVolumeReleased[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is released
I0904 20:27:09.182567       1 pv_controller.go:1405] doDeleteVolume [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]
I0904 20:27:09.310094       1 pv_controller.go:1260] deletion of volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:27:09.310120       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: set phase Failed
I0904 20:27:09.310129       1 pv_controller.go:858] updating PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: set phase Failed
I0904 20:27:09.319744       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" with version 2123
I0904 20:27:09.320074       1 pv_controller.go:879] volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" entered phase "Failed"
I0904 20:27:09.320096       1 pv_controller.go:901] volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:27:09.319768       1 pv_protection_controller.go:205] Got event on PV pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df
I0904 20:27:09.319787       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" with version 2123
I0904 20:27:09.320459       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: phase: Failed, bound to: "azuredisk-5194/pvc-fg4r8 (uid: 7a387835-dcc5-4ef4-804f-b5abd7ae60df)", boundByController: true
I0904 20:27:09.320588       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is bound to claim azuredisk-5194/pvc-fg4r8
I0904 20:27:09.320687       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: claim azuredisk-5194/pvc-fg4r8 not found
I0904 20:27:09.320781       1 pv_controller.go:1108] reclaimVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: policy is Delete
I0904 20:27:09.320880       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]]
E0904 20:27:09.320406       1 goroutinemap.go:150] Operation for "delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]" failed. No retries permitted until 2022-09-04 20:27:09.820231874 +0000 UTC m=+725.747606705 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:27:09.321073       1 pv_controller.go:1766] operation "delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]" postponed due to exponential backoff
I0904 20:27:09.321224       1 event.go:291] "Event occurred" object="pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:27:09.415140       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:27:09.415183       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 to the node "capz-7sh698-mp-0000001" mounted true
I0904 20:27:09.415193       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:27:09.436758       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152\"}]}}" for node "capz-7sh698-mp-0000001" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 0}]
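After the detach, the attach/detach controller patches the node status so volumesAttached lists only the remaining disk (devicePath "0" is the LUN). A sketch of assembling that patch with local stand-in types rather than the Kubernetes API structs:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // attachedVolume mirrors the fields visible in the status patch above; it is
    // an illustrative stand-in, not the real v1.AttachedVolume type.
    type attachedVolume struct {
        Name       string `json:"name"`
        DevicePath string `json:"devicePath"`
    }

    // volumesAttachedPatch builds the {"status":{"volumesAttached":[...]}} body.
    func volumesAttachedPatch(vols []attachedVolume) ([]byte, error) {
        return json.Marshal(map[string]interface{}{
            "status": map[string]interface{}{"volumesAttached": vols},
        })
    }

    func main() {
        patch, err := volumesAttachedPatch([]attachedVolume{{
            Name:       "kubernetes.io/azure-disk//subscriptions/.../disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152",
            DevicePath: "0",
        }})
        fmt.Println(string(patch), err)
    }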
... skipping 24 lines ...
I0904 20:27:17.696739       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:27:17.696757       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf found: phase: Bound, bound to: "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152", bindCompleted: true, boundByController: true
I0904 20:27:17.696771       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: all is bound
I0904 20:27:17.696779       1 pv_controller.go:858] updating PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Bound
I0904 20:27:17.696788       1 pv_controller.go:861] updating PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase Bound already set
I0904 20:27:17.696803       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" with version 2123
I0904 20:27:17.696821       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: phase: Failed, bound to: "azuredisk-5194/pvc-fg4r8 (uid: 7a387835-dcc5-4ef4-804f-b5abd7ae60df)", boundByController: true
I0904 20:27:17.696839       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is bound to claim azuredisk-5194/pvc-fg4r8
I0904 20:27:17.696854       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: claim azuredisk-5194/pvc-fg4r8 not found
I0904 20:27:17.696861       1 pv_controller.go:1108] reclaimVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: policy is Delete
I0904 20:27:17.696878       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]]
I0904 20:27:17.696903       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df] started
I0904 20:27:17.697069       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-tb4zj" with version 1898
... skipping 27 lines ...
I0904 20:27:17.697419       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-f8chf] status: phase Bound already set
I0904 20:27:17.697427       1 pv_controller.go:1038] volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" bound to claim "azuredisk-5194/pvc-f8chf"
I0904 20:27:17.697440       1 pv_controller.go:1039] volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:27:17.697453       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-f8chf" status after binding: phase: Bound, bound to: "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152", bindCompleted: true, boundByController: true
I0904 20:27:17.708864       1 pv_controller.go:1341] isVolumeReleased[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is released
I0904 20:27:17.708883       1 pv_controller.go:1405] doDeleteVolume [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]
I0904 20:27:17.708920       1 pv_controller.go:1260] deletion of volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) since it's in attaching or detaching state
I0904 20:27:17.708930       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: set phase Failed
I0904 20:27:17.708942       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: phase Failed already set
E0904 20:27:17.708976       1 goroutinemap.go:150] Operation for "delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]" failed. No retries permitted until 2022-09-04 20:27:18.708952318 +0000 UTC m=+734.636327149 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) since it's in attaching or detaching state"
I0904 20:27:17.713979       1 gc_controller.go:161] GC'ing orphaned
I0904 20:27:17.714001       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:27:18.365265       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:27:20.038062       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) returned with <nil>
I0904 20:27:20.038116       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df) succeeded
I0904 20:27:20.038153       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df was detached from node:capz-7sh698-mp-0000001
... skipping 15 lines ...
I0904 20:27:32.698181       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:27:32.698267       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf found: phase: Bound, bound to: "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152", bindCompleted: true, boundByController: true
I0904 20:27:32.698362       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: all is bound
I0904 20:27:32.698432       1 pv_controller.go:858] updating PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Bound
I0904 20:27:32.698584       1 pv_controller.go:861] updating PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase Bound already set
I0904 20:27:32.698757       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" with version 2123
I0904 20:27:32.698955       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: phase: Failed, bound to: "azuredisk-5194/pvc-fg4r8 (uid: 7a387835-dcc5-4ef4-804f-b5abd7ae60df)", boundByController: true
I0904 20:27:32.699153       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is bound to claim azuredisk-5194/pvc-fg4r8
I0904 20:27:32.699350       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: claim azuredisk-5194/pvc-fg4r8 not found
I0904 20:27:32.699509       1 pv_controller.go:1108] reclaimVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: policy is Delete
I0904 20:27:32.699676       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]]
I0904 20:27:32.699849       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df] started
I0904 20:27:32.697185       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-tb4zj" with version 1898
... skipping 36 lines ...
I0904 20:27:38.409427       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df
I0904 20:27:38.409468       1 pv_controller.go:1436] volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" deleted
I0904 20:27:38.409483       1 pv_controller.go:1284] deleteVolumeOperation [pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: success
I0904 20:27:38.421200       1 pv_protection_controller.go:205] Got event on PV pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df
I0904 20:27:38.421237       1 pv_protection_controller.go:125] Processing PV pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df
I0904 20:27:38.421638       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df" with version 2167
I0904 20:27:38.421676       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: phase: Failed, bound to: "azuredisk-5194/pvc-fg4r8 (uid: 7a387835-dcc5-4ef4-804f-b5abd7ae60df)", boundByController: true
I0904 20:27:38.421708       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: volume is bound to claim azuredisk-5194/pvc-fg4r8
I0904 20:27:38.421727       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: claim azuredisk-5194/pvc-fg4r8 not found
I0904 20:27:38.421735       1 pv_controller.go:1108] reclaimVolume[pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df]: policy is Delete
I0904 20:27:38.421752       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]]
I0904 20:27:38.421758       1 pv_controller.go:1764] operation "delete-pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df[5f957651-c32d-4c56-b528-18eeefd40b17]" is already running, skipping
I0904 20:27:38.430662       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-7a387835-dcc5-4ef4-804f-b5abd7ae60df
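With the managed disk gone, the PV protection controller strips its finalizer so the PV object itself can be removed from the API server. A minimal sketch of the finalizer removal as a plain slice edit, using strings instead of real object metadata:

    package main

    import "fmt"

    // removeFinalizer returns the finalizer list without the named entry.
    func removeFinalizer(finalizers []string, name string) []string {
        out := finalizers[:0]
        for _, f := range finalizers {
            if f != name {
                out = append(out, f)
            }
        }
        return out
    }

    func main() {
        f := []string{"kubernetes.io/pv-protection"}
        fmt.Println(removeFinalizer(f, "kubernetes.io/pv-protection")) // [] -> PV can now be deleted
    }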
... skipping 188 lines ...
I0904 20:28:11.612653       1 pv_controller.go:1108] reclaimVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: policy is Delete
I0904 20:28:11.612744       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]]
I0904 20:28:11.612832       1 pv_controller.go:1764] operation "delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]" is already running, skipping
I0904 20:28:11.612203       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152] started
I0904 20:28:11.614415       1 pv_controller.go:1341] isVolumeReleased[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is released
I0904 20:28:11.614472       1 pv_controller.go:1405] doDeleteVolume [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]
I0904 20:28:11.645710       1 pv_controller.go:1260] deletion of volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:28:11.645735       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Failed
I0904 20:28:11.645746       1 pv_controller.go:858] updating PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Failed
I0904 20:28:11.649115       1 pv_protection_controller.go:205] Got event on PV pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152
I0904 20:28:11.649234       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2229
I0904 20:28:11.649545       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase: Failed, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:28:11.649693       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:28:11.649812       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf not found
I0904 20:28:11.649930       1 pv_controller.go:1108] reclaimVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: policy is Delete
I0904 20:28:11.650032       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]]
I0904 20:28:11.650122       1 pv_controller.go:1764] operation "delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]" is already running, skipping
I0904 20:28:11.650672       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2229
I0904 20:28:11.650853       1 pv_controller.go:879] volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" entered phase "Failed"
I0904 20:28:11.650949       1 pv_controller.go:901] volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
E0904 20:28:11.651092       1 goroutinemap.go:150] Operation for "delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]" failed. No retries permitted until 2022-09-04 20:28:12.151061744 +0000 UTC m=+788.078436675 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:28:11.651413       1 event.go:291] "Event occurred" object="pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:28:13.578708       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0904 20:28:15.429118       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-7sh698-mp-0000001, refreshing the cache
I0904 20:28:15.565076       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 0 items received
I0904 20:28:17.363053       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="88.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:49896" resp=200
I0904 20:28:17.606884       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 5 lines ...
I0904 20:28:17.699209       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj found: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:17.699223       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: all is bound
I0904 20:28:17.699235       1 pv_controller.go:858] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: set phase Bound
I0904 20:28:17.699245       1 pv_controller.go:861] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase Bound already set
I0904 20:28:17.699265       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2229
I0904 20:28:17.699084       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-tb4zj" with version 1898
I0904 20:28:17.699281       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase: Failed, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:28:17.699290       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:17.699300       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:28:17.699315       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf not found
I0904 20:28:17.699318       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" found: phase: Bound, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:28:17.699322       1 pv_controller.go:1108] reclaimVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: policy is Delete
I0904 20:28:17.699327       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: claim is already correctly bound
... skipping 12 lines ...
I0904 20:28:17.699449       1 pv_controller.go:1039] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:28:17.699462       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-tb4zj" status after binding: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:17.710787       1 pv_controller.go:1341] isVolumeReleased[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is released
I0904 20:28:17.710808       1 pv_controller.go:1405] doDeleteVolume [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]
I0904 20:28:17.714870       1 gc_controller.go:161] GC'ing orphaned
I0904 20:28:17.714894       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:28:17.744034       1 pv_controller.go:1260] deletion of volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:28:17.744064       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Failed
I0904 20:28:17.744072       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase Failed already set
E0904 20:28:17.744110       1 goroutinemap.go:150] Operation for "delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]" failed. No retries permitted until 2022-09-04 20:28:18.744081102 +0000 UTC m=+794.671456033 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
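
The two failures above show the same delete operation being retried with a doubling delay: durationBeforeRetry was 500ms at 20:28:11 and 1s here, and it keeps doubling on later failures (2s a few lines further down). Below is a minimal, self-contained Go sketch of that kind of capped exponential-backoff retry; the names retryDelete, deleteDisk and maxDelay are illustrative only and are not the controller-manager's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// deleteDisk stands in for the real Azure disk deletion call; here it simply
// fails a few times to mimic "disk already attached to node, could not be deleted".
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errors.New("disk already attached to node, could not be deleted")
	}
	return nil
}

// retryDelete retries with a doubling delay, starting at 500ms and capped at
// maxDelay, which is the same shape as the durationBeforeRetry values in the log.
func retryDelete(maxDelay time.Duration) error {
	delay := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		err := deleteDisk(attempt)
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	if err := retryDelete(10 * time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
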
I0904 20:28:18.400339       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:28:18.582656       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 0 items received
I0904 20:28:19.516181       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:28:19.516216       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:28:19.572442       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:28:19.572717       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 to the node "capz-7sh698-mp-0000001" mounted false
... skipping 5 lines ...
I0904 20:28:19.629623       1 azure_controller_vmss.go:175] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152)
I0904 20:28:22.834006       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:28:27.362754       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:33946" resp=200
I0904 20:28:32.618190       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:28:32.699982       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:28:32.700087       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2229
I0904 20:28:32.700211       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase: Failed, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:28:32.700133       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-tb4zj" with version 1898
I0904 20:28:32.700258       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:32.700296       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" found: phase: Bound, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:28:32.700310       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj]: claim is already correctly bound
I0904 20:28:32.700320       1 pv_controller.go:1012] binding volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" to claim "azuredisk-5194/pvc-tb4zj"
I0904 20:28:32.700329       1 pv_controller.go:910] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: binding to "azuredisk-5194/pvc-tb4zj"
... skipping 18 lines ...
I0904 20:28:32.701068       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj found: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:32.701083       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: all is bound
I0904 20:28:32.701095       1 pv_controller.go:858] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: set phase Bound
I0904 20:28:32.701111       1 pv_controller.go:861] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase Bound already set
I0904 20:28:32.705281       1 pv_controller.go:1341] isVolumeReleased[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is released
I0904 20:28:32.705303       1 pv_controller.go:1405] doDeleteVolume [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]
I0904 20:28:32.705401       1 pv_controller.go:1260] deletion of volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) since it's in attaching or detaching state
I0904 20:28:32.705462       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: set phase Failed
I0904 20:28:32.705491       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase Failed already set
E0904 20:28:32.705566       1 goroutinemap.go:150] Operation for "delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]" failed. No retries permitted until 2022-09-04 20:28:34.705536815 +0000 UTC m=+810.632911746 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) since it's in attaching or detaching state"
I0904 20:28:34.922828       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) returned with <nil>
I0904 20:28:34.922883       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152) succeeded
I0904 20:28:34.922894       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152 was detached from node:capz-7sh698-mp-0000001
I0904 20:28:34.922919       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152") on node "capz-7sh698-mp-0000001" 
I0904 20:28:37.362543       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="80.201µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36534" resp=200
I0904 20:28:37.597685       1 controller.go:272] Triggering nodeSync
... skipping 34 lines ...
I0904 20:28:47.701652       1 pv_controller.go:861] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase Bound already set
I0904 20:28:47.701677       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2229
I0904 20:28:47.701699       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-tb4zj] status: phase Bound already set
I0904 20:28:47.701735       1 pv_controller.go:1038] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" bound to claim "azuredisk-5194/pvc-tb4zj"
I0904 20:28:47.701770       1 pv_controller.go:1039] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:28:47.701800       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-tb4zj" status after binding: phase: Bound, bound to: "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09", bindCompleted: true, boundByController: true
I0904 20:28:47.701703       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase: Failed, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:28:47.701914       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:28:47.702692       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf not found
I0904 20:28:47.702769       1 pv_controller.go:1108] reclaimVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: policy is Delete
I0904 20:28:47.702804       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]]
I0904 20:28:47.703231       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152] started
I0904 20:28:47.728140       1 pv_controller.go:1341] isVolumeReleased[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is released
... skipping 3 lines ...
I0904 20:28:53.452937       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152
I0904 20:28:53.452974       1 pv_controller.go:1436] volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" deleted
I0904 20:28:53.452988       1 pv_controller.go:1284] deleteVolumeOperation [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: success
I0904 20:28:53.459080       1 pv_protection_controller.go:205] Got event on PV pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152
I0904 20:28:53.459111       1 pv_protection_controller.go:125] Processing PV pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152
I0904 20:28:53.459534       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" with version 2292
I0904 20:28:53.459570       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: phase: Failed, bound to: "azuredisk-5194/pvc-f8chf (uid: 0c9f7d5c-ea00-4d13-bc87-c24f176ec152)", boundByController: true
I0904 20:28:53.459597       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: volume is bound to claim azuredisk-5194/pvc-f8chf
I0904 20:28:53.459616       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: claim azuredisk-5194/pvc-f8chf not found
I0904 20:28:53.459625       1 pv_controller.go:1108] reclaimVolume[pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152]: policy is Delete
I0904 20:28:53.459640       1 pv_controller.go:1753] scheduleOperation[delete-pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152[9b477736-1983-4f12-b841-859d09a68a64]]
I0904 20:28:53.459663       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152] started
I0904 20:28:53.463754       1 pv_controller.go:1244] Volume "pvc-0c9f7d5c-ea00-4d13-bc87-c24f176ec152" is already being deleted
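
Messages such as "operation ... is already running, skipping" and "Volume ... is already being deleted" above indicate that each delete is keyed by operation name and a second copy is refused while one is in flight. The following is a small single-flight sketch of that idea using only the Go standard library; the opMap type and its Run method are illustrative and not the actual goroutinemap implementation.

package main

import (
	"fmt"
	"sync"
	"time"
)

// opMap tracks which named operations are currently running, mirroring the
// "is already running, skipping" behaviour seen in the log.
type opMap struct {
	mu      sync.Mutex
	running map[string]bool
}

// Run starts fn under the given name unless an operation with that name is
// already in flight, in which case the new request is skipped.
func (m *opMap) Run(name string, fn func()) bool {
	m.mu.Lock()
	if m.running[name] {
		m.mu.Unlock()
		fmt.Printf("operation %q is already running, skipping\n", name)
		return false
	}
	m.running[name] = true
	m.mu.Unlock()

	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, name)
			m.mu.Unlock()
		}()
		fn()
	}()
	return true
}

func main() {
	m := &opMap{running: map[string]bool{}}
	op := "delete-pvc-example"
	m.Run(op, func() { time.Sleep(100 * time.Millisecond) }) // starts
	m.Run(op, func() {})                                     // skipped: already running
	time.Sleep(200 * time.Millisecond)
}
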
... skipping 175 lines ...
I0904 20:29:34.111646       1 pv_controller.go:1108] reclaimVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: policy is Delete
I0904 20:29:34.111688       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]]
I0904 20:29:34.111903       1 pv_controller.go:1764] operation "delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]" is already running, skipping
I0904 20:29:34.111570       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09] started
I0904 20:29:34.113893       1 pv_controller.go:1341] isVolumeReleased[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is released
I0904 20:29:34.113966       1 pv_controller.go:1405] doDeleteVolume [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]
I0904 20:29:34.170796       1 pv_controller.go:1260] deletion of volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
I0904 20:29:34.170822       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: set phase Failed
I0904 20:29:34.170833       1 pv_controller.go:858] updating PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: set phase Failed
I0904 20:29:34.174970       1 pv_protection_controller.go:205] Got event on PV pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09
I0904 20:29:34.175010       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" with version 2362
I0904 20:29:34.175041       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase: Failed, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:29:34.175067       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is bound to claim azuredisk-5194/pvc-tb4zj
I0904 20:29:34.175087       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj not found
I0904 20:29:34.175095       1 pv_controller.go:1108] reclaimVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: policy is Delete
I0904 20:29:34.175110       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]]
I0904 20:29:34.175118       1 pv_controller.go:1764] operation "delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]" is already running, skipping
I0904 20:29:34.175837       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" with version 2362
I0904 20:29:34.175866       1 pv_controller.go:879] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" entered phase "Failed"
I0904 20:29:34.175878       1 pv_controller.go:901] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted
E0904 20:29:34.175940       1 goroutinemap.go:150] Operation for "delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]" failed. No retries permitted until 2022-09-04 20:29:34.675909214 +0000 UTC m=+870.603284145 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:29:34.176202       1 event.go:291] "Event occurred" object="pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_0), could not be deleted"
I0904 20:29:34.285104       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
I0904 20:29:34.285145       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 to the node "capz-7sh698-mp-0000000" mounted false
I0904 20:29:34.318372       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000000"
I0904 20:29:34.318738       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 to the node "capz-7sh698-mp-0000000" mounted false
I0904 20:29:34.318784       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-7sh698-mp-0000000" succeeded. VolumesAttached: []
... skipping 10 lines ...
I0904 20:29:45.675390       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:29:47.365955       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="117.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39460" resp=200
I0904 20:29:47.608866       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:47.622522       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:29:47.703887       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:29:47.703962       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" with version 2362
I0904 20:29:47.704002       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase: Failed, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:29:47.704088       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is bound to claim azuredisk-5194/pvc-tb4zj
I0904 20:29:47.704128       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj not found
I0904 20:29:47.704171       1 pv_controller.go:1108] reclaimVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: policy is Delete
I0904 20:29:47.704219       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]]
I0904 20:29:47.704363       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09] started
I0904 20:29:47.714826       1 pv_controller.go:1341] isVolumeReleased[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is released
I0904 20:29:47.714849       1 pv_controller.go:1405] doDeleteVolume [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]
I0904 20:29:47.714888       1 pv_controller.go:1260] deletion of volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) since it's in attaching or detaching state
I0904 20:29:47.714903       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: set phase Failed
I0904 20:29:47.714918       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase Failed already set
E0904 20:29:47.714949       1 goroutinemap.go:150] Operation for "delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]" failed. No retries permitted until 2022-09-04 20:29:48.714927885 +0000 UTC m=+884.642302816 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) since it's in attaching or detaching state"
I0904 20:29:48.453666       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:29:49.594371       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) returned with <nil>
I0904 20:29:49.594426       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09) succeeded
I0904 20:29:49.594437       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09 was detached from node:capz-7sh698-mp-0000000
I0904 20:29:49.594460       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09") on node "capz-7sh698-mp-0000000" 
I0904 20:29:54.951591       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
... skipping 3 lines ...
I0904 20:29:57.722636       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:29:58.579222       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0904 20:30:00.581166       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0904 20:30:02.623655       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:30:02.704272       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:30:02.704517       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" with version 2362
I0904 20:30:02.704620       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase: Failed, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:30:02.704717       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is bound to claim azuredisk-5194/pvc-tb4zj
I0904 20:30:02.704748       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj not found
I0904 20:30:02.704759       1 pv_controller.go:1108] reclaimVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: policy is Delete
I0904 20:30:02.704777       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]]
I0904 20:30:02.704892       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09] started
I0904 20:30:02.721272       1 pv_controller.go:1341] isVolumeReleased[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is released
I0904 20:30:02.721299       1 pv_controller.go:1405] doDeleteVolume [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]
I0904 20:30:05.321580       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 18 items received
I0904 20:30:07.363047       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.7µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46580" resp=200
I0904 20:30:07.935129       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09
I0904 20:30:07.935168       1 pv_controller.go:1436] volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" deleted
I0904 20:30:07.935182       1 pv_controller.go:1284] deleteVolumeOperation [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: success
I0904 20:30:07.949822       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09" with version 2415
I0904 20:30:07.949867       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: phase: Failed, bound to: "azuredisk-5194/pvc-tb4zj (uid: eae50e56-ff98-4ea4-a6fa-2de3a86b2b09)", boundByController: true
I0904 20:30:07.949893       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: volume is bound to claim azuredisk-5194/pvc-tb4zj
I0904 20:30:07.949912       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: claim azuredisk-5194/pvc-tb4zj not found
I0904 20:30:07.949920       1 pv_controller.go:1108] reclaimVolume[pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09]: policy is Delete
I0904 20:30:07.949939       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09[fbca4651-4af9-4583-8060-c47593b7bfdd]]
I0904 20:30:07.949970       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09] started
I0904 20:30:07.949836       1 pv_protection_controller.go:205] Got event on PV pvc-eae50e56-ff98-4ea4-a6fa-2de3a86b2b09
... skipping 24 lines ...
I0904 20:30:11.852576       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457", timestamp:time.Time{wall:0xc0bd6094f2d13b46, ext:907779947829, loc:(*time.Location)(0x731ea80)}}
I0904 20:30:11.855223       1 replica_set.go:559] "Too few replicas" replicaSet="azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457" need=1 creating=1
I0904 20:30:11.856739       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-srkd5" timed out (false) [last progress check: 2022-09-04 20:30:11.852178696 +0000 UTC m=+907.779553527 - now: 2022-09-04 20:30:11.856730521 +0000 UTC m=+907.784105352]
I0904 20:30:11.857315       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5"
I0904 20:30:11.863166       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457-l692b"
I0904 20:30:11.863474       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" duration="15.984188ms"
I0904 20:30:11.863678       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-srkd5\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:30:11.863720       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" startTime="2022-09-04 20:30:11.86370156 +0000 UTC m=+907.791076391"
I0904 20:30:11.863927       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457-l692b" podUID=69cfad8e-483d-49a9-a1c5-92ff5455ed88
I0904 20:30:11.863958       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1353/pvc-rvtlw"
I0904 20:30:11.864077       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-1353/pvc-rvtlw" duration="4.4µs"
I0904 20:30:11.864100       1 disruption.go:415] addPod called on pod "azuredisk-volume-tester-srkd5-5d87c9c457-l692b"
I0904 20:30:11.864121       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-srkd5-5d87c9c457-l692b, PodDisruptionBudget controller will avoid syncing.
... skipping 27 lines ...
I0904 20:30:11.887019       1 pv_controller.go:1446] provisionClaim[azuredisk-1353/pvc-rvtlw]: started
I0904 20:30:11.887127       1 pv_controller.go:1753] scheduleOperation[provision-azuredisk-1353/pvc-rvtlw[5ca9129f-35ed-4309-ac66-ab86565374d8]]
I0904 20:30:11.887232       1 pv_controller.go:1764] operation "provision-azuredisk-1353/pvc-rvtlw[5ca9129f-35ed-4309-ac66-ab86565374d8]" is already running, skipping
I0904 20:30:11.887582       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457"
I0904 20:30:11.887852       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-rvtlw" with version 2442
I0904 20:30:11.887941       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" duration="14.897683ms"
I0904 20:30:11.888259       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-srkd5\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:30:11.888379       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" startTime="2022-09-04 20:30:11.888288996 +0000 UTC m=+907.815663827"
I0904 20:30:11.890966       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457" (11.959667ms)
I0904 20:30:11.891112       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457", timestamp:time.Time{wall:0xc0bd6094f2d13b46, ext:907779947829, loc:(*time.Location)(0x731ea80)}}
I0904 20:30:11.891323       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457" (216.501µs)
I0904 20:30:11.891734       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 StorageAccountType:StandardSSD_LRS Size:10
I0904 20:30:11.893515       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" duration="5.215629ms"
I0904 20:30:11.893572       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" startTime="2022-09-04 20:30:11.893551525 +0000 UTC m=+907.820926456"
I0904 20:30:11.894196       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5"
I0904 20:30:11.896555       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" duration="2.990017ms"
I0904 20:30:11.896660       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-srkd5\": the object has been modified; please apply your changes to the latest version and try again"
I0904 20:30:11.896837       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" startTime="2022-09-04 20:30:11.896763643 +0000 UTC m=+907.824138574"
I0904 20:30:11.897193       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-srkd5" timed out (false) [last progress check: 2022-09-04 20:30:11 +0000 UTC - now: 2022-09-04 20:30:11.897188645 +0000 UTC m=+907.824563476]
I0904 20:30:11.897283       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-srkd5" for a progress check after 599s
I0904 20:30:11.897317       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" duration="542.003µs"
I0904 20:30:11.902119       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-srkd5" startTime="2022-09-04 20:30:11.902086173 +0000 UTC m=+907.829461004"
I0904 20:30:11.902541       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-srkd5" timed out (false) [last progress check: 2022-09-04 20:30:11 +0000 UTC - now: 2022-09-04 20:30:11.902537975 +0000 UTC m=+907.829912806]
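
The repeated "Error syncing deployment ... the object has been modified; please apply your changes to the latest version and try again" lines above are ordinary optimistic-concurrency conflicts: a write is rejected unless it was made against the latest version of the object, and the controller simply re-reads and retries. Here is a stand-alone Go sketch of that compare-and-swap idea; the store type and its version field are illustrative and not the Kubernetes API machinery.

package main

import (
	"errors"
	"fmt"
	"sync"
)

var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// store holds one object with a version that bumps on every successful write,
// mimicking resourceVersion-based optimistic concurrency.
type store struct {
	mu      sync.Mutex
	version int
	value   string
}

// Get returns the current value and the version it was read at.
func (s *store) Get() (string, int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.value, s.version
}

// Update succeeds only if the caller saw the latest version.
func (s *store) Update(newValue string, seenVersion int) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if seenVersion != s.version {
		return errConflict
	}
	s.value = newValue
	s.version++
	return nil
}

func main() {
	s := &store{}
	_, v := s.Get()
	// A concurrent writer gets in first, so our stale update conflicts.
	_ = s.Update("other-writer", v)
	if err := s.Update("ours", v); err != nil {
		fmt.Println("conflict:", err)
		// Retry against the latest version, as the deployment controller does.
		_, v = s.Get()
		fmt.Println("retry ok:", s.Update("ours", v) == nil)
	}
}
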
... skipping 263 lines ...
I0904 20:30:36.876446       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-srkd5-5d87c9c457-284n2, PodDisruptionBudget controller will avoid syncing.
I0904 20:30:36.876554       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-srkd5-5d87c9c457-284n2"
I0904 20:30:36.876874       1 replica_set.go:439] Pod azuredisk-volume-tester-srkd5-5d87c9c457-284n2 updated, objectMeta {Name:azuredisk-volume-tester-srkd5-5d87c9c457-284n2 GenerateName:azuredisk-volume-tester-srkd5-5d87c9c457- Namespace:azuredisk-1353 SelfLink: UID:e6f4a487-77ff-49ac-80d6-4dd94e8d21a3 ResourceVersion:2534 Generation:0 CreationTimestamp:2022-09-04 20:30:36 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:5d87c9c457] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-srkd5-5d87c9c457 UID:437de03b-ede3-4f13-a4e2-cd942c7fda06 Controller:0xc0022ed00e BlockOwnerDeletion:0xc0022ed00f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:30:36 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"437de03b-ede3-4f13-a4e2-cd942c7fda06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}}]} -> {Name:azuredisk-volume-tester-srkd5-5d87c9c457-284n2 GenerateName:azuredisk-volume-tester-srkd5-5d87c9c457- Namespace:azuredisk-1353 SelfLink: UID:e6f4a487-77ff-49ac-80d6-4dd94e8d21a3 ResourceVersion:2539 Generation:0 CreationTimestamp:2022-09-04 20:30:36 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:5d87c9c457] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-srkd5-5d87c9c457 UID:437de03b-ede3-4f13-a4e2-cd942c7fda06 Controller:0xc0020e7bce BlockOwnerDeletion:0xc0020e7bcf}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 20:30:36 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"437de03b-ede3-4f13-a4e2-cd942c7fda06\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 20:30:36 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]}.
I0904 20:30:36.877290       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457", timestamp:time.Time{wall:0xc0bd609b312ddc3d, ext:932752464016, loc:(*time.Location)(0x731ea80)}}
I0904 20:30:36.877488       1 controller_utils.go:972] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457-l692b in state Running, deletion time 2022-09-04 20:31:06 +0000 UTC
I0904 20:30:36.877662       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457" (378.002µs)
W0904 20:30:36.907425       1 reconciler.go:385] Multi-Attach error for volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8") from node "capz-7sh698-mp-0000001" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457-l692b on node capz-7sh698-mp-0000000
I0904 20:30:36.907684       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-srkd5-5d87c9c457-284n2" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-5ca9129f-35ed-4309-ac66-ab86565374d8\" Volume is already used by pod(s) azuredisk-volume-tester-srkd5-5d87c9c457-l692b"
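
The Multi-Attach warning above is the attach/detach controller refusing to attach the disk to capz-7sh698-mp-0000001 while it is still recorded as attached to the old node: a single-writer Azure disk can be attached to only one node at a time, so the new pod's attach must wait for the detach to finish. A toy Go sketch of that guard follows; attachments and tryAttach are illustrative names, not the controller's actual_state_of_world API.

package main

import "fmt"

// attachments maps a volume name to the node it is currently attached to.
var attachments = map[string]string{
	"pvc-example": "node-a", // still attached to the old node
}

// tryAttach refuses to attach a single-writer volume to a second node,
// which is the situation reported as a Multi-Attach error in the log.
func tryAttach(volume, node string) error {
	if current, ok := attachments[volume]; ok && current != node {
		return fmt.Errorf("Multi-Attach error for volume %q: already used on node %q", volume, current)
	}
	attachments[volume] = node
	return nil
}

func main() {
	if err := tryAttach("pvc-example", "node-b"); err != nil {
		fmt.Println(err) // must wait until the volume is detached from node-a
	}
	delete(attachments, "pvc-example") // detach completes
	fmt.Println("after detach, attach error:", tryAttach("pvc-example", "node-b"))
}
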
I0904 20:30:37.362426       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="65.1µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54888" resp=200
I0904 20:30:37.724087       1 gc_controller.go:161] GC'ing orphaned
I0904 20:30:37.724119       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:30:39.641726       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:30:42.855947       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:30:46.001019       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-crftsz Secret: 2022-09-04T20:30:46Z
... skipping 420 lines ...
I0904 20:32:29.177427       1 pv_controller.go:1108] reclaimVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: policy is Delete
I0904 20:32:29.177438       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]]
I0904 20:32:29.177468       1 pv_controller.go:1764] operation "delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]" is already running, skipping
I0904 20:32:29.177498       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8] started
I0904 20:32:29.179757       1 pv_controller.go:1341] isVolumeReleased[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is released
I0904 20:32:29.179774       1 pv_controller.go:1405] doDeleteVolume [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]
I0904 20:32:29.179806       1 pv_controller.go:1260] deletion of volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state
I0904 20:32:29.179820       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: set phase Failed
I0904 20:32:29.179829       1 pv_controller.go:858] updating PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: set phase Failed
I0904 20:32:29.182610       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" with version 2735
I0904 20:32:29.182635       1 pv_controller.go:879] volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" entered phase "Failed"
I0904 20:32:29.182644       1 pv_controller.go:901] volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state
E0904 20:32:29.182686       1 goroutinemap.go:150] Operation for "delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]" failed. No retries permitted until 2022-09-04 20:32:29.682664336 +0000 UTC m=+1045.610039167 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state"
I0904 20:32:29.182931       1 event.go:291] "Event occurred" object="pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state"
I0904 20:32:29.183036       1 pv_protection_controller.go:205] Got event on PV pvc-5ca9129f-35ed-4309-ac66-ab86565374d8
I0904 20:32:29.183101       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" with version 2735
I0904 20:32:29.183200       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: phase: Failed, bound to: "azuredisk-1353/pvc-rvtlw (uid: 5ca9129f-35ed-4309-ac66-ab86565374d8)", boundByController: true
I0904 20:32:29.183312       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is bound to claim azuredisk-1353/pvc-rvtlw
I0904 20:32:29.183371       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: claim azuredisk-1353/pvc-rvtlw not found
I0904 20:32:29.183386       1 pv_controller.go:1108] reclaimVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: policy is Delete
I0904 20:32:29.183400       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]]
I0904 20:32:29.183408       1 pv_controller.go:1766] operation "delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]" postponed due to exponential backoff
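
In this stretch the delete attempt is rejected with "since it's in attaching or detaching state" and then postponed by the backoff: a disk is not deleted while an attach or detach on it is still in progress, and the operation is only retried once the disk settles (the detach completing and the delete succeeding appear further down). A minimal Go sketch of that state guard; diskState values and deleteIfSettled are illustrative names, not the cloud-provider API.

package main

import "fmt"

// Transient states of a managed disk with respect to a VM.
const (
	stateUnattached = "Unattached"
	stateAttaching  = "Attaching"
	stateDetaching  = "Detaching"
)

// deleteIfSettled refuses to delete a disk that is mid attach/detach,
// matching the "attaching or detaching state" errors in the log above.
func deleteIfSettled(name, state string) error {
	if state == stateAttaching || state == stateDetaching {
		return fmt.Errorf("failed to delete disk(%s) since it's in attaching or detaching state", name)
	}
	// Real code would call the cloud API here; the sketch just reports success.
	return nil
}

func main() {
	disk := "capz-example-dynamic-pvc"
	if err := deleteIfSettled(disk, stateDetaching); err != nil {
		fmt.Println("postponed:", err)
	}
	if err := deleteIfSettled(disk, stateUnattached); err == nil {
		fmt.Println("deleted once the disk settled")
	}
}
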
I0904 20:32:31.884434       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-kj4usw" (10.501µs)
I0904 20:32:32.629578       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:32.708568       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:32.708662       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" with version 2735
I0904 20:32:32.708727       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: phase: Failed, bound to: "azuredisk-1353/pvc-rvtlw (uid: 5ca9129f-35ed-4309-ac66-ab86565374d8)", boundByController: true
I0904 20:32:32.708768       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is bound to claim azuredisk-1353/pvc-rvtlw
I0904 20:32:32.708794       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: claim azuredisk-1353/pvc-rvtlw not found
I0904 20:32:32.708809       1 pv_controller.go:1108] reclaimVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: policy is Delete
I0904 20:32:32.708831       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]]
I0904 20:32:32.708883       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8] started
I0904 20:32:32.717226       1 pv_controller.go:1341] isVolumeReleased[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is released
I0904 20:32:32.717252       1 pv_controller.go:1405] doDeleteVolume [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]
I0904 20:32:32.717288       1 pv_controller.go:1260] deletion of volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state
I0904 20:32:32.717302       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: set phase Failed
I0904 20:32:32.717315       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: phase Failed already set
E0904 20:32:32.717350       1 goroutinemap.go:150] Operation for "delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]" failed. No retries permitted until 2022-09-04 20:32:33.717323537 +0000 UTC m=+1049.644698868 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) since it's in attaching or detaching state"
I0904 20:32:34.100176       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 23 items received
I0904 20:32:35.306329       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) returned with <nil>
I0904 20:32:35.306391       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8) succeeded
I0904 20:32:35.306402       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8 was detached from node:capz-7sh698-mp-0000001
I0904 20:32:35.306590       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8") on node "capz-7sh698-mp-0000001" 
I0904 20:32:37.363504       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="76.101µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36598" resp=200
... skipping 4 lines ...
I0904 20:32:47.328037       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:32:47.362459       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="90.202µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39260" resp=200
I0904 20:32:47.614235       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:47.630712       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:32:47.709442       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:32:47.709653       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" with version 2735
I0904 20:32:47.709753       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: phase: Failed, bound to: "azuredisk-1353/pvc-rvtlw (uid: 5ca9129f-35ed-4309-ac66-ab86565374d8)", boundByController: true
I0904 20:32:47.709793       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is bound to claim azuredisk-1353/pvc-rvtlw
I0904 20:32:47.709816       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: claim azuredisk-1353/pvc-rvtlw not found
I0904 20:32:47.709824       1 pv_controller.go:1108] reclaimVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: policy is Delete
I0904 20:32:47.709840       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]]
I0904 20:32:47.709875       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8] started
I0904 20:32:47.718638       1 pv_controller.go:1341] isVolumeReleased[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is released
... skipping 2 lines ...
I0904 20:32:49.042336       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8
I0904 20:32:49.042373       1 pv_controller.go:1436] volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" deleted
I0904 20:32:49.042388       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: success
I0904 20:32:49.047325       1 pv_protection_controller.go:205] Got event on PV pvc-5ca9129f-35ed-4309-ac66-ab86565374d8
I0904 20:32:49.047363       1 pv_protection_controller.go:125] Processing PV pvc-5ca9129f-35ed-4309-ac66-ab86565374d8
I0904 20:32:49.047486       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5ca9129f-35ed-4309-ac66-ab86565374d8" with version 2766
I0904 20:32:49.047518       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: phase: Failed, bound to: "azuredisk-1353/pvc-rvtlw (uid: 5ca9129f-35ed-4309-ac66-ab86565374d8)", boundByController: true
I0904 20:32:49.047570       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: volume is bound to claim azuredisk-1353/pvc-rvtlw
I0904 20:32:49.047589       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: claim azuredisk-1353/pvc-rvtlw not found
I0904 20:32:49.047598       1 pv_controller.go:1108] reclaimVolume[pvc-5ca9129f-35ed-4309-ac66-ab86565374d8]: policy is Delete
I0904 20:32:49.047635       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]]
I0904 20:32:49.047645       1 pv_controller.go:1764] operation "delete-pvc-5ca9129f-35ed-4309-ac66-ab86565374d8[f11d95e7-8578-43a2-becd-3749f5b63192]" is already running, skipping
I0904 20:32:49.052526       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-5ca9129f-35ed-4309-ac66-ab86565374d8
... skipping 427 lines ...
I0904 20:33:12.809502       1 pv_controller.go:1039] volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-sbb4q (uid: 9eac15cc-2a90-4ac4-934d-0905a40035ef)", boundByController: true
I0904 20:33:12.809617       1 pv_controller.go:1040] claim "azuredisk-59/pvc-sbb4q" status after binding: phase: Bound, bound to: "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef", bindCompleted: true, boundByController: true
I0904 20:33:13.266973       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4376
I0904 20:33:13.298936       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4376, name kube-root-ca.crt, uid 1d9ac00e-4686-4ab6-b968-d0486d380c4f, event type delete
I0904 20:33:13.301635       1 publisher.go:181] Finished syncing namespace "azuredisk-4376" (3.01775ms)
I0904 20:33:13.306321       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4376, name default-token-sq5zr, uid 58814f33-d2fc-452a-be49-30b94d429407, event type delete
E0904 20:33:13.323654       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4376/default: secrets "default-token-8979w" is forbidden: unable to create new content in namespace azuredisk-4376 because it is being terminated
I0904 20:33:13.333270       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4376/default), service account deleted, removing tokens
I0904 20:33:13.334127       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3.3µs)
I0904 20:33:13.334149       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4376, name default, uid c20b2443-0ac7-407d-992b-0d71956a1757, event type delete
I0904 20:33:13.418163       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3.6µs)
I0904 20:33:13.418738       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0904 20:33:13.427073       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (163.213407ms)
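The tokens_controller error in this teardown ("unable to create new content in namespace azuredisk-4376 because it is being terminated") is an expected race while the test namespace is being deleted, not a test failure: the controller tries to recreate the default service-account token just as the namespace enters Terminating. If needed, the namespace state can be checked with a command along these lines (the kubeconfig path is an assumption for this run, not part of the test output):
# Show the phase of the namespace being torn down; it should report Terminating and then disappear.
kubectl --kubeconfig=./kubeconfig get namespace azuredisk-4376 -o jsonpath='{.status.phase}'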
I0904 20:33:14.098294       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7996
I0904 20:33:14.116612       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7996, name default-token-l8m2j, uid b819370b-ae26-4c87-a981-eafd0d4e388e, event type delete
E0904 20:33:14.127400       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7996/default: secrets "default-token-qnrrs" is forbidden: unable to create new content in namespace azuredisk-7996 because it is being terminated
I0904 20:33:14.129042       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7996/default), service account deleted, removing tokens
I0904 20:33:14.129125       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7996" (2.3µs)
I0904 20:33:14.129235       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7996, name default, uid f8eb3ddd-e539-441d-9893-114e97329828, event type delete
I0904 20:33:14.166163       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1 StorageAccountType:StandardSSD_LRS Size:10
I0904 20:33:14.194771       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7996, name kube-root-ca.crt, uid 0cdce6fc-f44c-4188-8abc-517100b329f9, event type delete
I0904 20:33:14.196741       1 publisher.go:181] Finished syncing namespace "azuredisk-7996" (2.153936ms)
... skipping 450 lines ...
I0904 20:33:51.088091       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]]
I0904 20:33:51.088225       1 pv_controller.go:1764] operation "delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]" is already running, skipping
I0904 20:33:51.087184       1 pv_protection_controller.go:205] Got event on PV pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
I0904 20:33:51.087624       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef] started
I0904 20:33:51.093403       1 pv_controller.go:1341] isVolumeReleased[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is released
I0904 20:33:51.093421       1 pv_controller.go:1405] doDeleteVolume [pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]
I0904 20:33:51.136316       1 pv_controller.go:1260] deletion of volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:33:51.136572       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: set phase Failed
I0904 20:33:51.136588       1 pv_controller.go:858] updating PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: set phase Failed
I0904 20:33:51.140798       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" with version 3005
I0904 20:33:51.141027       1 pv_controller.go:879] volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" entered phase "Failed"
I0904 20:33:51.141043       1 pv_controller.go:901] volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:33:51.140854       1 pv_protection_controller.go:205] Got event on PV pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
I0904 20:33:51.140874       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" with version 3005
I0904 20:33:51.141400       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: phase: Failed, bound to: "azuredisk-59/pvc-sbb4q (uid: 9eac15cc-2a90-4ac4-934d-0905a40035ef)", boundByController: true
I0904 20:33:51.141540       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is bound to claim azuredisk-59/pvc-sbb4q
I0904 20:33:51.141671       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: claim azuredisk-59/pvc-sbb4q not found
I0904 20:33:51.141797       1 pv_controller.go:1108] reclaimVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: policy is Delete
I0904 20:33:51.141913       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]]
I0904 20:33:51.141625       1 event.go:291] "Event occurred" object="pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
E0904 20:33:51.141191       1 goroutinemap.go:150] Operation for "delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]" failed. No retries permitted until 2022-09-04 20:33:51.64110171 +0000 UTC m=+1127.568476641 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:33:51.142330       1 pv_controller.go:1766] operation "delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]" postponed due to exponential backoff
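The entries above show the PV controller's normal handling of a Delete reclaim policy while the backing Azure disk is still attached: doDeleteVolume fails, the PV is set to Failed, a VolumeFailedDelete warning event is emitted, and the operation is postponed with exponential backoff until the attach/detach controller has detached the disk. To inspect such a stuck volume by hand, commands roughly like the following could be used (the kubeconfig path is an assumption; the disk ID is the one reported in this run):
# Inspect the PV stuck in Failed and check whether Azure still reports the disk as attached (managedBy is the attached VM).
kubectl --kubeconfig=./kubeconfig describe pv pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
az disk show --ids /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef --query managedBy -o tsv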
I0904 20:33:53.328598       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:33:54.601376       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 15 items received
I0904 20:33:57.363052       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:33508" resp=200
I0904 20:33:57.730807       1 gc_controller.go:161] GC'ing orphaned
I0904 20:33:57.730846       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 67 lines ...
I0904 20:34:02.725164       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: volume is bound to claim azuredisk-59/pvc-64228
I0904 20:34:02.725208       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: claim azuredisk-59/pvc-64228 found: phase: Bound, bound to: "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa", bindCompleted: true, boundByController: true
I0904 20:34:02.725229       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: all is bound
I0904 20:34:02.725238       1 pv_controller.go:858] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: set phase Bound
I0904 20:34:02.725249       1 pv_controller.go:861] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: phase Bound already set
I0904 20:34:02.725297       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" with version 3005
I0904 20:34:02.725320       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: phase: Failed, bound to: "azuredisk-59/pvc-sbb4q (uid: 9eac15cc-2a90-4ac4-934d-0905a40035ef)", boundByController: true
I0904 20:34:02.725345       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is bound to claim azuredisk-59/pvc-sbb4q
I0904 20:34:02.725390       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: claim azuredisk-59/pvc-sbb4q not found
I0904 20:34:02.725404       1 pv_controller.go:1108] reclaimVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: policy is Delete
I0904 20:34:02.725422       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]]
I0904 20:34:02.725506       1 pv_controller.go:1232] deleteVolumeOperation [pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef] started
I0904 20:34:02.725911       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 2922
... skipping 2 lines ...
I0904 20:34:02.726001       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: claim azuredisk-59/pvc-8ddrz found: phase: Bound, bound to: "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1", bindCompleted: true, boundByController: true
I0904 20:34:02.726046       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: all is bound
I0904 20:34:02.726085       1 pv_controller.go:858] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: set phase Bound
I0904 20:34:02.726112       1 pv_controller.go:861] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase Bound already set
I0904 20:34:02.732405       1 pv_controller.go:1341] isVolumeReleased[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is released
I0904 20:34:02.732426       1 pv_controller.go:1405] doDeleteVolume [pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]
I0904 20:34:02.732462       1 pv_controller.go:1260] deletion of volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) since it's in attaching or detaching state
I0904 20:34:02.732474       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: set phase Failed
I0904 20:34:02.732484       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: phase Failed already set
E0904 20:34:02.732518       1 goroutinemap.go:150] Operation for "delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]" failed. No retries permitted until 2022-09-04 20:34:03.73249302 +0000 UTC m=+1139.659867851 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) since it's in attaching or detaching state"
I0904 20:34:02.887347       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:34:07.362876       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.301µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:60930" resp=200
I0904 20:34:13.249895       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 8 items received
I0904 20:34:15.259656       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) returned with <nil>
I0904 20:34:15.259722       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef) succeeded
I0904 20:34:15.259735       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef was detached from node:capz-7sh698-mp-0000001
... skipping 15 lines ...
I0904 20:34:17.725527       1 pv_controller.go:858] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: set phase Bound
I0904 20:34:17.725536       1 pv_controller.go:861] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: phase Bound already set
I0904 20:34:17.725540       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-8ddrz]: volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" found: phase: Bound, bound to: "azuredisk-59/pvc-8ddrz (uid: df53fbd7-d88a-4d21-b48d-69988d0201d1)", boundByController: true
I0904 20:34:17.725550       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-8ddrz]: claim is already correctly bound
I0904 20:34:17.725550       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" with version 3005
I0904 20:34:17.725562       1 pv_controller.go:1012] binding volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" to claim "azuredisk-59/pvc-8ddrz"
I0904 20:34:17.725570       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: phase: Failed, bound to: "azuredisk-59/pvc-sbb4q (uid: 9eac15cc-2a90-4ac4-934d-0905a40035ef)", boundByController: true
I0904 20:34:17.725571       1 pv_controller.go:910] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: binding to "azuredisk-59/pvc-8ddrz"
I0904 20:34:17.725589       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is bound to claim azuredisk-59/pvc-sbb4q
I0904 20:34:17.725593       1 pv_controller.go:922] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: already bound to "azuredisk-59/pvc-8ddrz"
I0904 20:34:17.725601       1 pv_controller.go:858] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: set phase Bound
I0904 20:34:17.725609       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: claim azuredisk-59/pvc-sbb4q not found
I0904 20:34:17.725610       1 pv_controller.go:861] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase Bound already set
... skipping 41 lines ...
I0904 20:34:22.970364       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
I0904 20:34:22.970407       1 pv_controller.go:1436] volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" deleted
I0904 20:34:22.970423       1 pv_controller.go:1284] deleteVolumeOperation [pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: success
I0904 20:34:22.980044       1 pv_protection_controller.go:205] Got event on PV pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
I0904 20:34:22.980646       1 pv_protection_controller.go:125] Processing PV pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
I0904 20:34:22.980235       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" with version 3057
I0904 20:34:22.981054       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: phase: Failed, bound to: "azuredisk-59/pvc-sbb4q (uid: 9eac15cc-2a90-4ac4-934d-0905a40035ef)", boundByController: true
I0904 20:34:22.981149       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: volume is bound to claim azuredisk-59/pvc-sbb4q
I0904 20:34:22.981227       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: claim azuredisk-59/pvc-sbb4q not found
I0904 20:34:22.981317       1 pv_controller.go:1108] reclaimVolume[pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef]: policy is Delete
I0904 20:34:22.981400       1 pv_controller.go:1753] scheduleOperation[delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]]
I0904 20:34:22.981474       1 pv_controller.go:1764] operation "delete-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef[1e8553ef-2a20-4992-8ff1-03c272303bc8]" is already running, skipping
I0904 20:34:22.986337       1 pv_controller_base.go:235] volume "pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef" deleted
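Once the detach at 20:34:15 completes, the next retry of doDeleteVolume succeeds: the managed disk is removed in Azure, the delete operation is marked a success, and the PV object itself is deleted. A quick way to confirm both sides are clean afterwards might look like this (names are assumptions taken from this run):
# Confirm the PV object is gone and the dynamically provisioned disk no longer exists in the cluster resource group.
kubectl --kubeconfig=./kubeconfig get pv pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef
az disk list -g capz-7sh698 --query "[?name=='capz-7sh698-dynamic-pvc-9eac15cc-2a90-4ac4-934d-0905a40035ef'].name" -o tsv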
... skipping 44 lines ...
I0904 20:34:26.566141       1 pv_controller.go:1108] reclaimVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: policy is Delete
I0904 20:34:26.566151       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]]
I0904 20:34:26.566161       1 pv_controller.go:1764] operation "delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]" is already running, skipping
I0904 20:34:26.566206       1 pv_controller.go:1232] deleteVolumeOperation [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1] started
I0904 20:34:26.567868       1 pv_controller.go:1341] isVolumeReleased[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is released
I0904 20:34:26.567886       1 pv_controller.go:1405] doDeleteVolume [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]
I0904 20:34:26.601885       1 pv_controller.go:1260] deletion of volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:34:26.601908       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: set phase Failed
I0904 20:34:26.601920       1 pv_controller.go:858] updating PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: set phase Failed
I0904 20:34:26.606024       1 pv_protection_controller.go:205] Got event on PV pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1
I0904 20:34:26.606681       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 3067
I0904 20:34:26.606983       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase: Failed, bound to: "azuredisk-59/pvc-8ddrz (uid: df53fbd7-d88a-4d21-b48d-69988d0201d1)", boundByController: true
I0904 20:34:26.607155       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is bound to claim azuredisk-59/pvc-8ddrz
I0904 20:34:26.607434       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: claim azuredisk-59/pvc-8ddrz not found
I0904 20:34:26.607583       1 pv_controller.go:1108] reclaimVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: policy is Delete
I0904 20:34:26.607753       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]]
I0904 20:34:26.607913       1 pv_controller.go:1764] operation "delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]" is already running, skipping
I0904 20:34:26.608357       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 3067
I0904 20:34:26.608525       1 pv_controller.go:879] volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" entered phase "Failed"
I0904 20:34:26.608690       1 pv_controller.go:901] volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
E0904 20:34:26.608946       1 goroutinemap.go:150] Operation for "delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]" failed. No retries permitted until 2022-09-04 20:34:27.10887169 +0000 UTC m=+1163.036246521 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:34:26.609401       1 event.go:291] "Event occurred" object="pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:34:27.362989       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="73.601µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37246" resp=200
I0904 20:34:30.693889       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa) returned with <nil>
I0904 20:34:30.693956       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa) succeeded
I0904 20:34:30.693969       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa was detached from node:capz-7sh698-mp-0000001
I0904 20:34:30.693994       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa") on node "capz-7sh698-mp-0000001" 
... skipping 9 lines ...
I0904 20:34:32.726205       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: claim azuredisk-59/pvc-64228 found: phase: Bound, bound to: "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa", bindCompleted: true, boundByController: true
I0904 20:34:32.726220       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: all is bound
I0904 20:34:32.726227       1 pv_controller.go:858] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: set phase Bound
I0904 20:34:32.726235       1 pv_controller.go:861] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: phase Bound already set
I0904 20:34:32.726242       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-64228]: phase: Bound, bound to: "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa", bindCompleted: true, boundByController: true
I0904 20:34:32.726249       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 3067
I0904 20:34:32.726268       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase: Failed, bound to: "azuredisk-59/pvc-8ddrz (uid: df53fbd7-d88a-4d21-b48d-69988d0201d1)", boundByController: true
I0904 20:34:32.726273       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-64228]: volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" found: phase: Bound, bound to: "azuredisk-59/pvc-64228 (uid: a089d844-311b-44f7-8fc2-e1a3557e49fa)", boundByController: true
I0904 20:34:32.726282       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-64228]: claim is already correctly bound
I0904 20:34:32.726288       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is bound to claim azuredisk-59/pvc-8ddrz
I0904 20:34:32.726295       1 pv_controller.go:1012] binding volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" to claim "azuredisk-59/pvc-64228"
I0904 20:34:32.726305       1 pv_controller.go:910] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: binding to "azuredisk-59/pvc-64228"
I0904 20:34:32.726308       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: claim azuredisk-59/pvc-8ddrz not found
... skipping 9 lines ...
I0904 20:34:32.726399       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-64228] status: phase Bound already set
I0904 20:34:32.726418       1 pv_controller.go:1038] volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" bound to claim "azuredisk-59/pvc-64228"
I0904 20:34:32.726434       1 pv_controller.go:1039] volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-64228 (uid: a089d844-311b-44f7-8fc2-e1a3557e49fa)", boundByController: true
I0904 20:34:32.726448       1 pv_controller.go:1040] claim "azuredisk-59/pvc-64228" status after binding: phase: Bound, bound to: "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa", bindCompleted: true, boundByController: true
I0904 20:34:32.737956       1 pv_controller.go:1341] isVolumeReleased[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is released
I0904 20:34:32.737979       1 pv_controller.go:1405] doDeleteVolume [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]
I0904 20:34:32.738040       1 pv_controller.go:1260] deletion of volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) since it's in attaching or detaching state
I0904 20:34:32.738059       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: set phase Failed
I0904 20:34:32.738068       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase Failed already set
E0904 20:34:32.738121       1 goroutinemap.go:150] Operation for "delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]" failed. No retries permitted until 2022-09-04 20:34:33.738077453 +0000 UTC m=+1169.665452284 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) since it's in attaching or detaching state"
I0904 20:34:37.362662       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="152.501µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:36508" resp=200
I0904 20:34:37.732513       1 gc_controller.go:161] GC'ing orphaned
I0904 20:34:37.732557       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:34:40.151460       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 6 items received
I0904 20:34:46.110733       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) returned with <nil>
I0904 20:34:46.110786       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1) succeeded
... skipping 24 lines ...
I0904 20:34:47.727497       1 pv_controller.go:858] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: set phase Bound
I0904 20:34:47.727500       1 pv_controller.go:1038] volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" bound to claim "azuredisk-59/pvc-64228"
I0904 20:34:47.727506       1 pv_controller.go:861] updating PersistentVolume[pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa]: phase Bound already set
I0904 20:34:47.727519       1 pv_controller.go:1039] volume "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-64228 (uid: a089d844-311b-44f7-8fc2-e1a3557e49fa)", boundByController: true
I0904 20:34:47.727522       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 3067
I0904 20:34:47.727535       1 pv_controller.go:1040] claim "azuredisk-59/pvc-64228" status after binding: phase: Bound, bound to: "pvc-a089d844-311b-44f7-8fc2-e1a3557e49fa", bindCompleted: true, boundByController: true
I0904 20:34:47.727541       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase: Failed, bound to: "azuredisk-59/pvc-8ddrz (uid: df53fbd7-d88a-4d21-b48d-69988d0201d1)", boundByController: true
I0904 20:34:47.727560       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is bound to claim azuredisk-59/pvc-8ddrz
I0904 20:34:47.727587       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: claim azuredisk-59/pvc-8ddrz not found
I0904 20:34:47.727596       1 pv_controller.go:1108] reclaimVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: policy is Delete
I0904 20:34:47.727615       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]]
I0904 20:34:47.727657       1 pv_controller.go:1232] deleteVolumeOperation [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1] started
I0904 20:34:47.738268       1 pv_controller.go:1341] isVolumeReleased[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is released
... skipping 2 lines ...
I0904 20:34:53.811802       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1
I0904 20:34:53.811837       1 pv_controller.go:1436] volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" deleted
I0904 20:34:53.811851       1 pv_controller.go:1284] deleteVolumeOperation [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: success
I0904 20:34:53.817981       1 pv_protection_controller.go:205] Got event on PV pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1
I0904 20:34:53.818026       1 pv_protection_controller.go:125] Processing PV pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1
I0904 20:34:53.818139       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" with version 3107
I0904 20:34:53.818167       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: phase: Failed, bound to: "azuredisk-59/pvc-8ddrz (uid: df53fbd7-d88a-4d21-b48d-69988d0201d1)", boundByController: true
I0904 20:34:53.818193       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: volume is bound to claim azuredisk-59/pvc-8ddrz
I0904 20:34:53.818211       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: claim azuredisk-59/pvc-8ddrz not found
I0904 20:34:53.818222       1 pv_controller.go:1108] reclaimVolume[pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1]: policy is Delete
I0904 20:34:53.818237       1 pv_controller.go:1753] scheduleOperation[delete-pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1[cee38c5d-2f78-4fd9-8f8a-1856f9740e01]]
I0904 20:34:53.818258       1 pv_controller.go:1232] deleteVolumeOperation [pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1] started
I0904 20:34:53.821895       1 pv_controller.go:1244] Volume "pvc-df53fbd7-d88a-4d21-b48d-69988d0201d1" is already being deleted
... skipping 510 lines ...
I0904 20:35:47.035189       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:35:47.035340       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:35:47.035538       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
I0904 20:35:47.035684       1 pv_controller.go:1764] operation "delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]" is already running, skipping
I0904 20:35:47.036615       1 pv_controller.go:1341] isVolumeReleased[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is released
I0904 20:35:47.036631       1 pv_controller.go:1405] doDeleteVolume [pvc-e5557e4e-e96a-4949-b559-ec237dd73264]
I0904 20:35:47.066614       1 pv_controller.go:1260] deletion of volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:35:47.066642       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: set phase Failed
I0904 20:35:47.066654       1 pv_controller.go:858] updating PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: set phase Failed
I0904 20:35:47.070107       1 pv_protection_controller.go:205] Got event on PV pvc-e5557e4e-e96a-4949-b559-ec237dd73264
I0904 20:35:47.070328       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3262
I0904 20:35:47.070666       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase: Failed, bound to: "azuredisk-2546/pvc-br577 (uid: e5557e4e-e96a-4949-b559-ec237dd73264)", boundByController: true
I0904 20:35:47.070752       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is bound to claim azuredisk-2546/pvc-br577
I0904 20:35:47.070773       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:35:47.070782       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:35:47.070799       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
I0904 20:35:47.070841       1 pv_controller.go:1764] operation "delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]" is already running, skipping
I0904 20:35:47.071570       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3262
I0904 20:35:47.071598       1 pv_controller.go:879] volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" entered phase "Failed"
I0904 20:35:47.071608       1 pv_controller.go:901] volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:35:47.071997       1 event.go:291] "Event occurred" object="pvc-e5557e4e-e96a-4949-b559-ec237dd73264" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
E0904 20:35:47.072234       1 goroutinemap.go:150] Operation for "delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]" failed. No retries permitted until 2022-09-04 20:35:47.57169844 +0000 UTC m=+1243.499073271 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:35:47.363375       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="64.401µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59848" resp=200
I0904 20:35:47.617970       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:35:47.638437       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:35:47.729522       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:35:47.729619       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3262
I0904 20:35:47.729661       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase: Failed, bound to: "azuredisk-2546/pvc-br577 (uid: e5557e4e-e96a-4949-b559-ec237dd73264)", boundByController: true
I0904 20:35:47.729704       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is bound to claim azuredisk-2546/pvc-br577
I0904 20:35:47.729724       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:35:47.729729       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-cdwch" with version 3165
I0904 20:35:47.729733       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:35:47.729747       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-cdwch]: phase: Bound, bound to: "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a", bindCompleted: true, boundByController: true
I0904 20:35:47.729751       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
... skipping 18 lines ...
I0904 20:35:47.729929       1 pv_controller.go:1039] volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-cdwch (uid: 7bdf6c86-6f21-4250-9022-b6ef8eb9116a)", boundByController: true
I0904 20:35:47.729952       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-cdwch" status after binding: phase: Bound, bound to: "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a", bindCompleted: true, boundByController: true
I0904 20:35:47.729837       1 pv_controller.go:858] updating PersistentVolume[pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a]: set phase Bound
I0904 20:35:47.729965       1 pv_controller.go:861] updating PersistentVolume[pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a]: phase Bound already set
I0904 20:35:47.734317       1 pv_controller.go:1341] isVolumeReleased[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is released
I0904 20:35:47.734339       1 pv_controller.go:1405] doDeleteVolume [pvc-e5557e4e-e96a-4949-b559-ec237dd73264]
I0904 20:35:47.763894       1 pv_controller.go:1260] deletion of volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:35:47.763923       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: set phase Failed
I0904 20:35:47.763930       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase Failed already set
E0904 20:35:47.763962       1 goroutinemap.go:150] Operation for "delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]" failed. No retries permitted until 2022-09-04 20:35:48.763937948 +0000 UTC m=+1244.691312779 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:35:48.680641       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:35:49.085142       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-ocgw1u" (16.301µs)
I0904 20:35:50.004373       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:35:50.004406       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:35:50.004418       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264 to the node "capz-7sh698-mp-0000001" mounted false
I0904 20:35:50.102652       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a\"}]}}" for node "capz-7sh698-mp-0000001" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a 1}]
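The node_status_updater entry above shows node.status.volumesAttached for capz-7sh698-mp-0000001 being updated so that only the pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a disk is still listed; the detach of the pvc-e5557e4e-e96a-4949-b559-ec237dd73264 disk completes shortly afterwards. The same information can be read directly from the node object (kubeconfig path is an assumption for this run):
# List the Azure disks the API server still considers attached to this node.
kubectl --kubeconfig=./kubeconfig get node capz-7sh698-mp-0000001 -o jsonpath='{.status.volumesAttached}'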
... skipping 42 lines ...
I0904 20:36:02.731015       1 pv_controller.go:1038] volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" bound to claim "azuredisk-2546/pvc-cdwch"
I0904 20:36:02.731003       1 pv_controller.go:858] updating PersistentVolume[pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a]: set phase Bound
I0904 20:36:02.731036       1 pv_controller.go:1039] volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-cdwch (uid: 7bdf6c86-6f21-4250-9022-b6ef8eb9116a)", boundByController: true
I0904 20:36:02.731041       1 pv_controller.go:861] updating PersistentVolume[pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a]: phase Bound already set
I0904 20:36:02.731052       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-cdwch" status after binding: phase: Bound, bound to: "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a", bindCompleted: true, boundByController: true
I0904 20:36:02.731060       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3262
I0904 20:36:02.731366       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase: Failed, bound to: "azuredisk-2546/pvc-br577 (uid: e5557e4e-e96a-4949-b559-ec237dd73264)", boundByController: true
I0904 20:36:02.731437       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is bound to claim azuredisk-2546/pvc-br577
I0904 20:36:02.731466       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:36:02.731508       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:36:02.731529       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
I0904 20:36:02.731598       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e5557e4e-e96a-4949-b559-ec237dd73264] started
I0904 20:36:02.746902       1 pv_controller.go:1341] isVolumeReleased[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is released
I0904 20:36:02.746922       1 pv_controller.go:1405] doDeleteVolume [pvc-e5557e4e-e96a-4949-b559-ec237dd73264]
I0904 20:36:02.746962       1 pv_controller.go:1260] deletion of volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) since it's in attaching or detaching state
I0904 20:36:02.746978       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: set phase Failed
I0904 20:36:02.746988       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase Failed already set
E0904 20:36:02.747021       1 goroutinemap.go:150] Operation for "delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]" failed. No retries permitted until 2022-09-04 20:36:04.746997213 +0000 UTC m=+1260.674372044 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) since it's in attaching or detaching state"
I0904 20:36:05.327737       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0904 20:36:05.752404       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) returned with <nil>
I0904 20:36:05.752471       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264) succeeded
I0904 20:36:05.752488       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264 was detached from node:capz-7sh698-mp-0000001
I0904 20:36:05.752514       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264") on node "capz-7sh698-mp-0000001" 
I0904 20:36:05.752574       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-7sh698-mp-0000001, refreshing the cache
... skipping 21 lines ...
I0904 20:36:17.731467       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-cdwch] status: set phase Bound
I0904 20:36:17.731531       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-cdwch] status: phase Bound already set
I0904 20:36:17.731586       1 pv_controller.go:1038] volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" bound to claim "azuredisk-2546/pvc-cdwch"
I0904 20:36:17.731636       1 pv_controller.go:1039] volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-cdwch (uid: 7bdf6c86-6f21-4250-9022-b6ef8eb9116a)", boundByController: true
I0904 20:36:17.731694       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-cdwch" status after binding: phase: Bound, bound to: "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a", bindCompleted: true, boundByController: true
I0904 20:36:17.731721       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3262
I0904 20:36:17.731752       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase: Failed, bound to: "azuredisk-2546/pvc-br577 (uid: e5557e4e-e96a-4949-b559-ec237dd73264)", boundByController: true
I0904 20:36:17.731785       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is bound to claim azuredisk-2546/pvc-br577
I0904 20:36:17.731805       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:36:17.731819       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:36:17.731837       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
I0904 20:36:17.731863       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a" with version 3162
I0904 20:36:17.731882       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bdf6c86-6f21-4250-9022-b6ef8eb9116a]: phase: Bound, bound to: "azuredisk-2546/pvc-cdwch (uid: 7bdf6c86-6f21-4250-9022-b6ef8eb9116a)", boundByController: true
... skipping 18 lines ...
I0904 20:36:22.560052       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 48 items received
I0904 20:36:22.910584       1 node_lifecycle_controller.go:1047] Node capz-7sh698-control-plane-sh4jc ReadyCondition updated. Updating timestamp.
I0904 20:36:23.004701       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-e5557e4e-e96a-4949-b559-ec237dd73264
I0904 20:36:23.004733       1 pv_controller.go:1436] volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" deleted
I0904 20:36:23.004745       1 pv_controller.go:1284] deleteVolumeOperation [pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: success
I0904 20:36:23.009253       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e5557e4e-e96a-4949-b559-ec237dd73264" with version 3320
I0904 20:36:23.009481       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: phase: Failed, bound to: "azuredisk-2546/pvc-br577 (uid: e5557e4e-e96a-4949-b559-ec237dd73264)", boundByController: true
I0904 20:36:23.009656       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: volume is bound to claim azuredisk-2546/pvc-br577
I0904 20:36:23.009678       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: claim azuredisk-2546/pvc-br577 not found
I0904 20:36:23.009738       1 pv_controller.go:1108] reclaimVolume[pvc-e5557e4e-e96a-4949-b559-ec237dd73264]: policy is Delete
I0904 20:36:23.009763       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e5557e4e-e96a-4949-b559-ec237dd73264[dce4be7e-a929-4711-9aab-94526eea1299]]
I0904 20:36:23.009807       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e5557e4e-e96a-4949-b559-ec237dd73264] started
I0904 20:36:23.009991       1 pv_protection_controller.go:205] Got event on PV pvc-e5557e4e-e96a-4949-b559-ec237dd73264
... skipping 292 lines ...
I0904 20:36:42.631764       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-kwgng] status: phase Bound already set
I0904 20:36:42.631795       1 pv_controller.go:1038] volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" bound to claim "azuredisk-8582/pvc-kwgng"
I0904 20:36:42.631842       1 pv_controller.go:1039] volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:36:42.631859       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-kwgng" status after binding: phase: Bound, bound to: "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788", bindCompleted: true, boundByController: true
I0904 20:36:42.774269       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2546
I0904 20:36:42.800246       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2546, name default-token-rg5qv, uid 9a3de692-c29f-4bc4-b6b3-ffc7cd365461, event type delete
E0904 20:36:42.813420       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2546/default: secrets "default-token-8z5qs" is forbidden: unable to create new content in namespace azuredisk-2546 because it is being terminated
I0904 20:36:42.834648       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c29b9e1b5639, uid ea5516c4-0ba3-430d-9fe7-abd3abb0fa32, event type delete
I0904 20:36:42.837583       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c29e28ee6b0d, uid 3ecdb21c-6acb-47f8-8b78-d834fcb29e26, event type delete
I0904 20:36:42.840067       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c29f75ead032, uid c016f768-ba78-4ae9-9328-a257e8468af1, event type delete
I0904 20:36:42.844370       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c29f75eb6f32, uid 9eee4bbf-188d-4e75-a1f6-7e916691b60e, event type delete
I0904 20:36:42.846601       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c2a0a2732d42, uid 83ea1e80-a19b-4744-874f-38269c7e3712, event type delete
I0904 20:36:42.849611       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-wdrfq.1711c2a37705b253, uid 7a1290fc-1d03-4372-b609-a3035e41d0e2, event type delete
... skipping 10 lines ...
I0904 20:36:42.898675       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2546, name default, uid 100374eb-06cb-4cf5-a31c-c84e8dd8a933, event type delete
I0904 20:36:42.950934       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2546" (3.9µs)
I0904 20:36:42.952526       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2546, estimate: 0, errors: <nil>
I0904 20:36:42.963587       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2546" (192.311986ms)
I0904 20:36:43.417580       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1598
I0904 20:36:43.472143       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1598, name default-token-t9fk6, uid 5dbc5936-95ce-4cda-ac2d-0421a2eac5ef, event type delete
E0904 20:36:43.486832       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1598/default: secrets "default-token-c42n6" is forbidden: unable to create new content in namespace azuredisk-1598 because it is being terminated
I0904 20:36:43.529539       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1598/default), service account deleted, removing tokens
I0904 20:36:43.529836       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.8µs)
I0904 20:36:43.529872       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1598, name default, uid a1939fcc-763d-4e2a-aa08-d006494b2d41, event type delete
I0904 20:36:43.538737       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-7sh698-dynamic-pvc-fafbe749-fa6c-41e0-a660-d06520d375f0 StorageAccountType:Premium_LRS Size:10
I0904 20:36:43.554602       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1598, name kube-root-ca.crt, uid 1794efd5-4eb0-4d3f-943c-6332a57515db, event type delete
I0904 20:36:43.556475       1 publisher.go:181] Finished syncing namespace "azuredisk-1598" (2.068119ms)
... skipping 87 lines ...
I0904 20:36:44.017050       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc" to node "capz-7sh698-mp-0000001".
I0904 20:36:44.017280       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-7sh698-dynamic-pvc-fafbe749-fa6c-41e0-a660-d06520d375f0. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-fafbe749-fa6c-41e0-a660-d06520d375f0" to node "capz-7sh698-mp-0000001".
I0904 20:36:44.070406       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc" lun 0 to node "capz-7sh698-mp-0000001".
I0904 20:36:44.070462       1 azure_controller_vmss.go:101] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - attach disk(capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc) with DiskEncryptionSetID()
I0904 20:36:44.079056       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0904 20:36:44.116023       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-75m6r, uid c7f8a7f6-a3e1-48d5-8dfc-e0728f611cce, event type delete
E0904 20:36:44.127075       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-cqx2l" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0904 20:36:44.127871       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 7d18a826-4f3c-4c85-ab26-b65fe1836294, event type delete
I0904 20:36:44.129683       1 publisher.go:181] Finished syncing namespace "azuredisk-3410" (2.011419ms)
I0904 20:36:44.135944       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (3.8µs)
I0904 20:36:44.136133       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0904 20:36:44.136367       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid e56e1f1d-bf7d-4ff3-9b0e-ba21a267cc59, event type delete
I0904 20:36:44.220798       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.4µs)
... skipping 371 lines ...
I0904 20:37:20.828751       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: claim azuredisk-8582/pvc-kwgng not found
I0904 20:37:20.828908       1 pv_controller.go:1108] reclaimVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: policy is Delete
I0904 20:37:20.829051       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]]
I0904 20:37:20.829190       1 pv_controller.go:1764] operation "delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]" is already running, skipping
I0904 20:37:20.829839       1 pv_controller.go:1341] isVolumeReleased[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is released
I0904 20:37:20.830055       1 pv_controller.go:1405] doDeleteVolume [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]
I0904 20:37:20.860774       1 pv_controller.go:1260] deletion of volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:37:20.860799       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: set phase Failed
I0904 20:37:20.860809       1 pv_controller.go:858] updating PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: set phase Failed
I0904 20:37:20.864387       1 pv_protection_controller.go:205] Got event on PV pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788
I0904 20:37:20.864515       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3515
I0904 20:37:20.864604       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase: Failed, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:37:20.864709       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is bound to claim azuredisk-8582/pvc-kwgng
I0904 20:37:20.864801       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: claim azuredisk-8582/pvc-kwgng not found
I0904 20:37:20.864901       1 pv_controller.go:1108] reclaimVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: policy is Delete
I0904 20:37:20.864999       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]]
I0904 20:37:20.865093       1 pv_controller.go:1764] operation "delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]" is already running, skipping
I0904 20:37:20.865966       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3515
I0904 20:37:20.865992       1 pv_controller.go:879] volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" entered phase "Failed"
I0904 20:37:20.866001       1 pv_controller.go:901] volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:37:20.866363       1 event.go:291] "Event occurred" object="pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
E0904 20:37:20.866456       1 goroutinemap.go:150] Operation for "delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]" failed. No retries permitted until 2022-09-04 20:37:21.366036744 +0000 UTC m=+1337.293411675 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:37:22.816570       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 2 items received
I0904 20:37:23.317342       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0904 20:37:27.362875       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="92.701µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48270" resp=200
I0904 20:37:29.582794       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 3 items received
I0904 20:37:30.119833       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-7sh698-mp-0000001"
I0904 20:37:30.121234       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-23ff5eea-f177-42da-8074-e5d724df16cc to the node "capz-7sh698-mp-0000001" mounted false
... skipping 65 lines ...
I0904 20:37:32.736564       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: volume is bound to claim azuredisk-8582/pvc-nkpw4
I0904 20:37:32.736621       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: claim azuredisk-8582/pvc-nkpw4 found: phase: Bound, bound to: "pvc-23ff5eea-f177-42da-8074-e5d724df16cc", bindCompleted: true, boundByController: true
I0904 20:37:32.736645       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: all is bound
I0904 20:37:32.736653       1 pv_controller.go:858] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: set phase Bound
I0904 20:37:32.736695       1 pv_controller.go:861] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: phase Bound already set
I0904 20:37:32.736725       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3515
I0904 20:37:32.736786       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase: Failed, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:37:32.736818       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is bound to claim azuredisk-8582/pvc-kwgng
I0904 20:37:32.736879       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: claim azuredisk-8582/pvc-kwgng not found
I0904 20:37:32.736896       1 pv_controller.go:1108] reclaimVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: policy is Delete
I0904 20:37:32.736915       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]]
I0904 20:37:32.736973       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fafbe749-fa6c-41e0-a660-d06520d375f0" with version 3426
I0904 20:37:32.737002       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fafbe749-fa6c-41e0-a660-d06520d375f0]: phase: Bound, bound to: "azuredisk-8582/pvc-ckjgg (uid: fafbe749-fa6c-41e0-a660-d06520d375f0)", boundByController: true
... skipping 2 lines ...
I0904 20:37:32.737099       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fafbe749-fa6c-41e0-a660-d06520d375f0]: all is bound
I0904 20:37:32.737145       1 pv_controller.go:858] updating PersistentVolume[pvc-fafbe749-fa6c-41e0-a660-d06520d375f0]: set phase Bound
I0904 20:37:32.737164       1 pv_controller.go:861] updating PersistentVolume[pvc-fafbe749-fa6c-41e0-a660-d06520d375f0]: phase Bound already set
I0904 20:37:32.737190       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788] started
I0904 20:37:32.744480       1 pv_controller.go:1341] isVolumeReleased[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is released
I0904 20:37:32.744504       1 pv_controller.go:1405] doDeleteVolume [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]
I0904 20:37:32.778650       1 pv_controller.go:1260] deletion of volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted
I0904 20:37:32.778684       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: set phase Failed
I0904 20:37:32.778696       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase Failed already set
E0904 20:37:32.778738       1 goroutinemap.go:150] Operation for "delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]" failed. No retries permitted until 2022-09-04 20:37:33.778706214 +0000 UTC m=+1349.706081045 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/virtualMachineScaleSets/capz-7sh698-mp-0/virtualMachines/capz-7sh698-mp-0_1), could not be deleted"
I0904 20:37:32.921577       1 node_lifecycle_controller.go:1047] Node capz-7sh698-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 20:37:36.330169       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0904 20:37:37.362229       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="75.8µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:39434" resp=200
I0904 20:37:37.739438       1 gc_controller.go:161] GC'ing orphaned
I0904 20:37:37.739480       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0904 20:37:39.591452       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 648 items received
... skipping 13 lines ...
I0904 20:37:47.736439       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: volume is bound to claim azuredisk-8582/pvc-nkpw4
I0904 20:37:47.736455       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: claim azuredisk-8582/pvc-nkpw4 found: phase: Bound, bound to: "pvc-23ff5eea-f177-42da-8074-e5d724df16cc", bindCompleted: true, boundByController: true
I0904 20:37:47.736469       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: all is bound
I0904 20:37:47.736479       1 pv_controller.go:858] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: set phase Bound
I0904 20:37:47.736487       1 pv_controller.go:861] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: phase Bound already set
I0904 20:37:47.736500       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3515
I0904 20:37:47.736518       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase: Failed, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:37:47.736539       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is bound to claim azuredisk-8582/pvc-kwgng
I0904 20:37:47.736557       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: claim azuredisk-8582/pvc-kwgng not found
I0904 20:37:47.736567       1 pv_controller.go:1108] reclaimVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: policy is Delete
I0904 20:37:47.736585       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]]
I0904 20:37:47.736601       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-fafbe749-fa6c-41e0-a660-d06520d375f0" with version 3426
I0904 20:37:47.736623       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-fafbe749-fa6c-41e0-a660-d06520d375f0]: phase: Bound, bound to: "azuredisk-8582/pvc-ckjgg (uid: fafbe749-fa6c-41e0-a660-d06520d375f0)", boundByController: true
... skipping 34 lines ...
I0904 20:37:47.737212       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-ckjgg] status: phase Bound already set
I0904 20:37:47.737222       1 pv_controller.go:1038] volume "pvc-fafbe749-fa6c-41e0-a660-d06520d375f0" bound to claim "azuredisk-8582/pvc-ckjgg"
I0904 20:37:47.737237       1 pv_controller.go:1039] volume "pvc-fafbe749-fa6c-41e0-a660-d06520d375f0" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-ckjgg (uid: fafbe749-fa6c-41e0-a660-d06520d375f0)", boundByController: true
I0904 20:37:47.737249       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-ckjgg" status after binding: phase: Bound, bound to: "pvc-fafbe749-fa6c-41e0-a660-d06520d375f0", bindCompleted: true, boundByController: true
I0904 20:37:47.744393       1 pv_controller.go:1341] isVolumeReleased[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is released
I0904 20:37:47.744414       1 pv_controller.go:1405] doDeleteVolume [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]
I0904 20:37:47.744453       1 pv_controller.go:1260] deletion of volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) since it's in attaching or detaching state
I0904 20:37:47.744474       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: set phase Failed
I0904 20:37:47.744483       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase Failed already set
E0904 20:37:47.744519       1 goroutinemap.go:150] Operation for "delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]" failed. No retries permitted until 2022-09-04 20:37:49.744491541 +0000 UTC m=+1365.671866472 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) since it's in attaching or detaching state"
I0904 20:37:48.744132       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0904 20:37:51.581173       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 10 items received
I0904 20:37:54.953747       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 3 items received
I0904 20:37:55.891888       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) returned with <nil>
I0904 20:37:55.891961       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788) succeeded
I0904 20:37:55.891973       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788 was detached from node:capz-7sh698-mp-0000001
... skipping 25 lines ...
I0904 20:38:02.738668       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: volume is bound to claim azuredisk-8582/pvc-nkpw4
I0904 20:38:02.738766       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: claim azuredisk-8582/pvc-nkpw4 found: phase: Bound, bound to: "pvc-23ff5eea-f177-42da-8074-e5d724df16cc", bindCompleted: true, boundByController: true
I0904 20:38:02.738822       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: all is bound
I0904 20:38:02.738853       1 pv_controller.go:858] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: set phase Bound
I0904 20:38:02.738914       1 pv_controller.go:861] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: phase Bound already set
I0904 20:38:02.739003       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3515
I0904 20:38:02.739068       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase: Failed, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:38:02.739111       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is bound to claim azuredisk-8582/pvc-kwgng
I0904 20:38:02.738226       1 pv_controller.go:858] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: set phase Bound
I0904 20:38:02.739265       1 pv_controller.go:861] updating PersistentVolume[pvc-23ff5eea-f177-42da-8074-e5d724df16cc]: phase Bound already set
I0904 20:38:02.739283       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-8582/pvc-nkpw4]: binding to "pvc-23ff5eea-f177-42da-8074-e5d724df16cc"
I0904 20:38:02.739337       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8582/pvc-nkpw4]: already bound to "pvc-23ff5eea-f177-42da-8074-e5d724df16cc"
I0904 20:38:02.739374       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-nkpw4] status: set phase Bound
... skipping 28 lines ...
I0904 20:38:08.813510       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788
I0904 20:38:08.813563       1 pv_controller.go:1436] volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" deleted
I0904 20:38:08.813577       1 pv_controller.go:1284] deleteVolumeOperation [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: success
I0904 20:38:08.824148       1 pv_protection_controller.go:205] Got event on PV pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788
I0904 20:38:08.824503       1 pv_protection_controller.go:125] Processing PV pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788
I0904 20:38:08.824187       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" with version 3588
I0904 20:38:08.824786       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: phase: Failed, bound to: "azuredisk-8582/pvc-kwgng (uid: ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788)", boundByController: true
I0904 20:38:08.824853       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: volume is bound to claim azuredisk-8582/pvc-kwgng
I0904 20:38:08.824914       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: claim azuredisk-8582/pvc-kwgng not found
I0904 20:38:08.824957       1 pv_controller.go:1108] reclaimVolume[pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788]: policy is Delete
I0904 20:38:08.825006       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788[7328ead0-3e87-4be1-a2d7-949842905dd0]]
I0904 20:38:08.825068       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788] started
I0904 20:38:08.829494       1 pv_controller.go:1244] Volume "pvc-ff43db3c-7bc7-4b28-bfcd-d3dc97f3d788" is already being deleted
... skipping 245 lines ...
I0904 20:38:37.117361       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (225.326022ms)
I0904 20:38:37.362538       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.201µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53736" resp=200
I0904 20:38:37.453675       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0904 20:38:37.469919       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid dfd79317-84a0-4e63-8066-0eeca71e8a30, event type delete
I0904 20:38:37.472379       1 publisher.go:181] Finished syncing namespace "azuredisk-7726" (2.479522ms)
I0904 20:38:37.510685       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-fxw7v, uid b3d07dc4-e37e-4a9e-8991-1d241c2160a0, event type delete
E0904 20:38:37.523216       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-tmwdl" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0904 20:38:37.551596       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0904 20:38:37.551788       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (3µs)
I0904 20:38:37.551862       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 0bb5a965-c78d-4ffb-8d74-2352cf6bf855, event type delete
I0904 20:38:37.599874       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (3.5µs)
I0904 20:38:37.600073       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
I0904 20:38:37.602183       1 controller.go:272] Triggering nodeSync
... skipping 78 lines ...
I0904 20:38:37.935195       1 pv_controller.go:1039] volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" status after binding: phase: Bound, bound to: "azuredisk-7051/pvc-hbwx9 (uid: 46244388-ff0c-4e3d-9e39-ff9bf82a535a)", boundByController: true
I0904 20:38:37.935279       1 pv_controller.go:1040] claim "azuredisk-7051/pvc-hbwx9" status after binding: phase: Bound, bound to: "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a", bindCompleted: true, boundByController: true
I0904 20:38:38.013010       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3086
I0904 20:38:38.050856       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-fxcmg, uid ad4be456-ba8d-4e33-a060-1bb529d15d3a, event type delete
I0904 20:38:38.059580       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid d60cdee4-9538-4018-a3c4-be80a83a93a0, event type delete
I0904 20:38:38.061159       1 publisher.go:181] Finished syncing namespace "azuredisk-3086" (1.779116ms)
E0904 20:38:38.066088       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-j5q4m" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0904 20:38:38.089982       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0904 20:38:38.090185       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2µs)
I0904 20:38:38.090223       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 4c198d3f-1021-42db-be46-60cc12f782df, event type delete
I0904 20:38:38.179772       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0904 20:38:38.180201       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.401µs)
I0904 20:38:38.197981       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (187.512482ms)
... skipping 266 lines ...
I0904 20:39:44.771038       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: claim azuredisk-7051/pvc-hbwx9 not found
I0904 20:39:44.771047       1 pv_controller.go:1108] reclaimVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: policy is Delete
I0904 20:39:44.771129       1 pv_controller.go:1753] scheduleOperation[delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]]
I0904 20:39:44.771143       1 pv_controller.go:1764] operation "delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]" is already running, skipping
I0904 20:39:44.772697       1 pv_controller.go:1341] isVolumeReleased[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: volume is released
I0904 20:39:44.772711       1 pv_controller.go:1405] doDeleteVolume [pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]
I0904 20:39:44.772778       1 pv_controller.go:1260] deletion of volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) since it's in attaching or detaching state
I0904 20:39:44.772842       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: set phase Failed
I0904 20:39:44.772884       1 pv_controller.go:858] updating PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: set phase Failed
I0904 20:39:44.775412       1 pv_protection_controller.go:205] Got event on PV pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a
I0904 20:39:44.775566       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" with version 3845
I0904 20:39:44.775743       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: phase: Failed, bound to: "azuredisk-7051/pvc-hbwx9 (uid: 46244388-ff0c-4e3d-9e39-ff9bf82a535a)", boundByController: true
I0904 20:39:44.775879       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: volume is bound to claim azuredisk-7051/pvc-hbwx9
I0904 20:39:44.776006       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: claim azuredisk-7051/pvc-hbwx9 not found
I0904 20:39:44.776104       1 pv_controller.go:1108] reclaimVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: policy is Delete
I0904 20:39:44.776304       1 pv_controller.go:1753] scheduleOperation[delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]]
I0904 20:39:44.776403       1 pv_controller.go:1764] operation "delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]" is already running, skipping
I0904 20:39:44.776792       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" with version 3845
I0904 20:39:44.776940       1 pv_controller.go:879] volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" entered phase "Failed"
I0904 20:39:44.776957       1 pv_controller.go:901] volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) since it's in attaching or detaching state
E0904 20:39:44.777002       1 goroutinemap.go:150] Operation for "delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]" failed. No retries permitted until 2022-09-04 20:39:45.276978393 +0000 UTC m=+1481.204353224 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) since it's in attaching or detaching state"
I0904 20:39:44.777242       1 event.go:291] "Event occurred" object="pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) since it's in attaching or detaching state"
I0904 20:39:45.338422       1 azure_controller_vmss.go:187] azureDisk - update(capz-7sh698): vm(capz-7sh698-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) returned with <nil>
I0904 20:39:45.338489       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a) succeeded
I0904 20:39:45.338507       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a was detached from node:capz-7sh698-mp-0000000
I0904 20:39:45.338533       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a") on node "capz-7sh698-mp-0000000" 
I0904 20:39:47.363175       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="66.2µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46054" resp=200
I0904 20:39:47.622677       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:39:47.649364       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 20:39:47.741808       1 pv_controller_base.go:528] resyncing PV controller
I0904 20:39:47.741926       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" with version 3845
I0904 20:39:47.742005       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: phase: Failed, bound to: "azuredisk-7051/pvc-hbwx9 (uid: 46244388-ff0c-4e3d-9e39-ff9bf82a535a)", boundByController: true
I0904 20:39:47.742081       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: volume is bound to claim azuredisk-7051/pvc-hbwx9
I0904 20:39:47.742147       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: claim azuredisk-7051/pvc-hbwx9 not found
I0904 20:39:47.742185       1 pv_controller.go:1108] reclaimVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: policy is Delete
I0904 20:39:47.742217       1 pv_controller.go:1753] scheduleOperation[delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]]
I0904 20:39:47.742294       1 pv_controller.go:1232] deleteVolumeOperation [pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a] started
I0904 20:39:47.750057       1 pv_controller.go:1341] isVolumeReleased[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: volume is released
... skipping 2 lines ...
I0904 20:39:52.997905       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-7sh698/providers/Microsoft.Compute/disks/capz-7sh698-dynamic-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a
I0904 20:39:52.997943       1 pv_controller.go:1436] volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" deleted
I0904 20:39:52.997958       1 pv_controller.go:1284] deleteVolumeOperation [pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: success
I0904 20:39:53.009334       1 pv_protection_controller.go:205] Got event on PV pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a
I0904 20:39:53.009370       1 pv_protection_controller.go:125] Processing PV pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a
I0904 20:39:53.009841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" with version 3859
I0904 20:39:53.009877       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: phase: Failed, bound to: "azuredisk-7051/pvc-hbwx9 (uid: 46244388-ff0c-4e3d-9e39-ff9bf82a535a)", boundByController: true
I0904 20:39:53.009906       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: volume is bound to claim azuredisk-7051/pvc-hbwx9
I0904 20:39:53.009923       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: claim azuredisk-7051/pvc-hbwx9 not found
I0904 20:39:53.009931       1 pv_controller.go:1108] reclaimVolume[pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a]: policy is Delete
I0904 20:39:53.009947       1 pv_controller.go:1753] scheduleOperation[delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]]
I0904 20:39:53.009954       1 pv_controller.go:1764] operation "delete-pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a[e28921b4-d765-4731-baff-90fb3f223402]" is already running, skipping
I0904 20:39:53.013611       1 pv_controller_base.go:235] volume "pvc-46244388-ff0c-4e3d-9e39-ff9bf82a535a" deleted
... skipping 639 lines ...
I0904 20:41:24.426958       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9183" (23.201µs)
2022/09/04 20:41:24 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1221.332 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-zsk7x, container manager
STEP: Dumping workload cluster default/capz-7sh698 logs
Sep  4 20:42:57.522: INFO: Collecting logs for Linux node capz-7sh698-control-plane-sh4jc in cluster capz-7sh698 in namespace default

Sep  4 20:43:57.523: INFO: Collecting boot logs for AzureMachine capz-7sh698-control-plane-sh4jc

Failed to get logs for machine capz-7sh698-control-plane-xgrh2, cluster default/capz-7sh698: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 20:43:58.220: INFO: Collecting logs for Linux node capz-7sh698-mp-0000000 in cluster capz-7sh698 in namespace default

Sep  4 20:44:58.221: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-7sh698-mp-0

Sep  4 20:44:58.504: INFO: Collecting logs for Linux node capz-7sh698-mp-0000001 in cluster capz-7sh698 in namespace default

Sep  4 20:45:58.506: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-7sh698-mp-0

Failed to get logs for machine pool capz-7sh698-mp-0, cluster default/capz-7sh698: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-7sh698 kube-system pod logs
STEP: Fetching kube-system pod logs took 433.170809ms
STEP: Dumping workload cluster default/capz-7sh698 Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-7sh698-control-plane-sh4jc, container etcd
STEP: Creating log watcher for controller kube-system/calico-node-bhcmx, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-qwcht
STEP: Creating log watcher for controller kube-system/calico-node-k2798, container calico-node
STEP: Collecting events for Pod kube-system/kube-proxy-vhjv9
STEP: Collecting events for Pod kube-system/kube-proxy-lbtt8
STEP: Collecting events for Pod kube-system/etcd-capz-7sh698-control-plane-sh4jc
STEP: failed to find events of Pod "etcd-capz-7sh698-control-plane-sh4jc"
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-7sh698-control-plane-sh4jc, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-7sh698-control-plane-sh4jc
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-7sh698-control-plane-sh4jc, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-capz-7sh698-control-plane-sh4jc"
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-7sh698-control-plane-sh4jc
STEP: failed to find events of Pod "kube-controller-manager-capz-7sh698-control-plane-sh4jc"
STEP: Collecting events for Pod kube-system/calico-node-bhcmx
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-fc888, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-fc888
STEP: Creating log watcher for controller kube-system/calico-node-85m8v, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-85m8v
STEP: Creating log watcher for controller kube-system/kube-proxy-qwcht, container kube-proxy
... skipping 3 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-7sh698-control-plane-sh4jc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-9q77m, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-dldjg
STEP: Creating log watcher for controller kube-system/kube-proxy-lbtt8, container kube-proxy
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-dldjg, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-7sh698-control-plane-sh4jc
STEP: failed to find events of Pod "kube-scheduler-capz-7sh698-control-plane-sh4jc"
STEP: Fetching activity logs took 3.579741394s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-7sh698" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...