Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-07 04:42
Elapsed: 48m45s
Revision
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 628 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
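
The identity-secret step above first looks for an existing secret (hence the NotFound error) and then creates and labels a fresh one. A minimal sketch of equivalent kubectl calls, with the namespace, key name, and label all assumed for illustration (the real hack/create-identity-secret.sh may differ):

# check for an existing secret; on a fresh cluster this returns NotFound
kubectl get secret cluster-identity-secret --namespace default || true
# create the secret holding the service principal credential (key name assumed)
kubectl create secret generic cluster-identity-secret \
  --namespace default \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
# label it so tooling can recognize it as cluster identity material (label assumed)
kubectl label secret cluster-identity-secret --namespace default \
  clusterctl.cluster.x-k8s.io/move="true"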
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 197 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 04:56:54.381: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hlmkj" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  7 04:56:54.412: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.578017ms
Sep  7 04:56:56.448: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066969997s
Sep  7 04:56:58.481: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099966436s
Sep  7 04:57:00.516: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13423559s
Sep  7 04:57:02.549: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167839444s
Sep  7 04:57:04.582: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200983263s
... skipping 10 lines ...
Sep  7 04:57:26.952: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 32.570679673s
Sep  7 04:57:28.985: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 34.603495664s
Sep  7 04:57:31.021: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 36.639470729s
Sep  7 04:57:33.056: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Pending", Reason="", readiness=false. Elapsed: 38.674116185s
Sep  7 04:57:35.090: INFO: Pod "azuredisk-volume-tester-hlmkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.708701036s
STEP: Saw pod success
Sep  7 04:57:35.090: INFO: Pod "azuredisk-volume-tester-hlmkj" satisfied condition "Succeeded or Failed"
Sep  7 04:57:35.090: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-hlmkj"
Sep  7 04:57:35.147: INFO: Pod azuredisk-volume-tester-hlmkj has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-hlmkj in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 04:57:35.272: INFO: deleting PVC "azuredisk-8081"/"pvc-bflhk"
Sep  7 04:57:35.272: INFO: Deleting PersistentVolumeClaim "pvc-bflhk"
STEP: waiting for claim's PV "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" to be deleted
Sep  7 04:57:35.306: INFO: Waiting up to 10m0s for PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 to get deleted
Sep  7 04:57:35.348: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Released (41.739449ms)
Sep  7 04:57:40.383: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (5.076806177s)
Sep  7 04:57:45.420: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (10.113502174s)
Sep  7 04:57:50.452: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (15.145513327s)
Sep  7 04:57:55.487: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (20.180660988s)
Sep  7 04:58:00.525: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (25.2180948s)
Sep  7 04:58:05.560: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 found and phase=Failed (30.253499017s)
Sep  7 04:58:10.596: INFO: PersistentVolume pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 was removed
Sep  7 04:58:10.596: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  7 04:58:10.627: INFO: Claim "azuredisk-8081" in namespace "pvc-bflhk" doesn't exist in the system
Sep  7 04:58:10.627: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-jfjm2
Sep  7 04:58:10.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  7 04:58:50.705: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-gs645"
Sep  7 04:58:50.755: INFO: Error getting logs for pod azuredisk-volume-tester-gs645: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-gs645)
STEP: Deleting pod azuredisk-volume-tester-gs645 in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 04:58:50.856: INFO: deleting PVC "azuredisk-5466"/"pvc-smz89"
Sep  7 04:58:50.856: INFO: Deleting PersistentVolumeClaim "pvc-smz89"
STEP: waiting for claim's PV "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" to be deleted
Sep  7 04:58:50.888: INFO: Waiting up to 10m0s for PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 to get deleted
Sep  7 04:58:50.919: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Bound (30.407173ms)
Sep  7 04:58:55.948: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (5.060116741s)
Sep  7 04:59:00.981: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (10.092795858s)
Sep  7 04:59:06.014: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (15.125725237s)
Sep  7 04:59:11.045: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (20.156633943s)
Sep  7 04:59:16.074: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (25.186241988s)
Sep  7 04:59:21.107: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 found and phase=Failed (30.219112061s)
Sep  7 04:59:26.140: INFO: PersistentVolume pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1 was removed
Sep  7 04:59:26.140: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  7 04:59:26.169: INFO: Claim "azuredisk-5466" in namespace "pvc-smz89" doesn't exist in the system
Sep  7 04:59:26.169: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-6s9xg
Sep  7 04:59:26.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 04:59:27.223: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g9rlw" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  7 04:59:27.251: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 27.834359ms
Sep  7 04:59:29.280: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057225449s
Sep  7 04:59:31.309: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.086023884s
Sep  7 04:59:33.340: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116291521s
Sep  7 04:59:35.369: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.145570358s
Sep  7 04:59:37.397: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.173968124s
... skipping 2 lines ...
Sep  7 04:59:43.490: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.266846124s
Sep  7 04:59:45.520: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.296605164s
Sep  7 04:59:47.550: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.326589516s
Sep  7 04:59:49.581: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Pending", Reason="", readiness=false. Elapsed: 22.357901283s
Sep  7 04:59:51.613: INFO: Pod "azuredisk-volume-tester-g9rlw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.389304936s
STEP: Saw pod success
Sep  7 04:59:51.613: INFO: Pod "azuredisk-volume-tester-g9rlw" satisfied condition "Succeeded or Failed"
Sep  7 04:59:51.613: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-g9rlw"
Sep  7 04:59:51.644: INFO: Pod azuredisk-volume-tester-g9rlw has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-g9rlw in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 04:59:51.740: INFO: deleting PVC "azuredisk-2790"/"pvc-nhr4g"
Sep  7 04:59:51.740: INFO: Deleting PersistentVolumeClaim "pvc-nhr4g"
STEP: waiting for claim's PV "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" to be deleted
Sep  7 04:59:51.770: INFO: Waiting up to 10m0s for PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 to get deleted
Sep  7 04:59:51.797: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Released (27.481048ms)
Sep  7 04:59:56.832: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (5.062072632s)
Sep  7 05:00:01.869: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (10.09863604s)
Sep  7 05:00:06.902: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (15.131862226s)
Sep  7 05:00:11.933: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (20.162847407s)
Sep  7 05:00:16.963: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (25.192683224s)
Sep  7 05:00:21.997: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 found and phase=Failed (30.226580065s)
Sep  7 05:00:27.029: INFO: PersistentVolume pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 was removed
Sep  7 05:00:27.029: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  7 05:00:27.057: INFO: Claim "azuredisk-2790" in namespace "pvc-nhr4g" doesn't exist in the system
Sep  7 05:00:27.057: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-2pczq
Sep  7 05:00:27.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  7 05:00:27.824: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-x5t6r" in namespace "azuredisk-5356" to be "Error status code"
Sep  7 05:00:27.855: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 31.31863ms
Sep  7 05:00:29.887: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063038382s
Sep  7 05:00:31.917: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093412452s
Sep  7 05:00:33.946: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122711842s
Sep  7 05:00:35.976: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152804138s
Sep  7 05:00:38.006: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182732801s
Sep  7 05:00:40.037: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 12.213563785s
Sep  7 05:00:42.068: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 14.24479322s
Sep  7 05:00:44.099: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 16.275532777s
Sep  7 05:00:46.129: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 18.305346697s
Sep  7 05:00:48.159: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Pending", Reason="", readiness=false. Elapsed: 20.335740394s
Sep  7 05:00:50.190: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Running", Reason="", readiness=true. Elapsed: 22.366884612s
Sep  7 05:00:52.220: INFO: Pod "azuredisk-volume-tester-x5t6r": Phase="Failed", Reason="", readiness=false. Elapsed: 24.396781885s
STEP: Saw pod failure
Sep  7 05:00:52.220: INFO: Pod "azuredisk-volume-tester-x5t6r" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 05:00:52.252: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-x5t6r"
Sep  7 05:00:52.283: INFO: Pod azuredisk-volume-tester-x5t6r has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-x5t6r in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 05:00:52.377: INFO: deleting PVC "azuredisk-5356"/"pvc-vncjg"
Sep  7 05:00:52.377: INFO: Deleting PersistentVolumeClaim "pvc-vncjg"
STEP: waiting for claim's PV "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" to be deleted
Sep  7 05:00:52.408: INFO: Waiting up to 10m0s for PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 to get deleted
Sep  7 05:00:52.436: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Released (28.361234ms)
Sep  7 05:00:57.469: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (5.060932595s)
Sep  7 05:01:02.501: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (10.092992646s)
Sep  7 05:01:07.537: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (15.12918891s)
Sep  7 05:01:12.569: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (20.161044712s)
Sep  7 05:01:17.601: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (25.193430367s)
Sep  7 05:01:22.631: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 found and phase=Failed (30.222953345s)
Sep  7 05:01:27.662: INFO: PersistentVolume pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 was removed
Sep  7 05:01:27.662: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  7 05:01:27.690: INFO: Claim "azuredisk-5356" in namespace "pvc-vncjg" doesn't exist in the system
Sep  7 05:01:27.690: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-f6gxl
Sep  7 05:01:27.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 55 lines ...
Sep  7 05:02:54.144: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (15.120545295s)
Sep  7 05:02:59.176: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (20.152879891s)
Sep  7 05:03:04.208: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (25.184052687s)
Sep  7 05:03:09.236: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (30.21226808s)
Sep  7 05:03:14.268: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (35.244440693s)
Sep  7 05:03:19.301: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Bound (40.277396811s)
Sep  7 05:03:24.329: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (45.305847816s)
Sep  7 05:03:29.360: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (50.336597619s)
Sep  7 05:03:34.392: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (55.368199966s)
Sep  7 05:03:39.424: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (1m0.400318869s)
Sep  7 05:03:44.452: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (1m5.428849376s)
Sep  7 05:03:49.482: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc found and phase=Failed (1m10.458059403s)
Sep  7 05:03:54.512: INFO: PersistentVolume pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc was removed
Sep  7 05:03:54.512: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 05:03:54.540: INFO: Claim "azuredisk-5194" in namespace "pvc-tjdsd" doesn't exist in the system
Sep  7 05:03:54.540: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-9sw2p
Sep  7 05:03:54.571: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-ps7j4"
Sep  7 05:03:54.620: INFO: Pod azuredisk-volume-tester-ps7j4 has the following logs: 
... skipping 8 lines ...
Sep  7 05:03:59.797: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (5.05981653s)
Sep  7 05:04:04.826: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (10.088199479s)
Sep  7 05:04:09.856: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (15.118483786s)
Sep  7 05:04:14.889: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (20.151186778s)
Sep  7 05:04:19.919: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (25.181186841s)
Sep  7 05:04:24.948: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Bound (30.210585873s)
Sep  7 05:04:29.978: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (35.24083753s)
Sep  7 05:04:35.010: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (40.27304161s)
Sep  7 05:04:40.043: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (45.305813589s)
Sep  7 05:04:45.075: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (50.337598979s)
Sep  7 05:04:50.103: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (55.366025228s)
Sep  7 05:04:55.136: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (1m0.398154521s)
Sep  7 05:05:00.178: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (1m5.440430539s)
Sep  7 05:05:05.218: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e found and phase=Failed (1m10.480657915s)
Sep  7 05:05:10.256: INFO: PersistentVolume pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e was removed
Sep  7 05:05:10.256: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 05:05:10.293: INFO: Claim "azuredisk-5194" in namespace "pvc-24zwq" doesn't exist in the system
Sep  7 05:05:10.293: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-n6rfr
Sep  7 05:05:10.331: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-2kwhv"
Sep  7 05:05:10.385: INFO: Pod azuredisk-volume-tester-2kwhv has the following logs: 
... skipping 10 lines ...
Sep  7 05:05:25.739: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (15.156073763s)
Sep  7 05:05:30.780: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (20.197160122s)
Sep  7 05:05:35.822: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (25.238871343s)
Sep  7 05:05:40.864: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (30.2807126s)
Sep  7 05:05:45.905: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (35.321609504s)
Sep  7 05:05:50.942: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Bound (40.359220852s)
Sep  7 05:05:55.975: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Failed (45.392346752s)
Sep  7 05:06:01.008: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Failed (50.424918992s)
Sep  7 05:06:06.037: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 found and phase=Failed (55.453852789s)
Sep  7 05:06:11.070: INFO: PersistentVolume pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 was removed
Sep  7 05:06:11.070: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 05:06:11.098: INFO: Claim "azuredisk-5194" in namespace "pvc-6g8f7" doesn't exist in the system
Sep  7 05:06:11.098: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-svv2l
Sep  7 05:06:11.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 63 lines ...
Sep  7 05:09:07.615: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (10.090433042s)
Sep  7 05:09:12.646: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (15.122333879s)
Sep  7 05:09:17.679: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (20.154816622s)
Sep  7 05:09:22.711: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (25.186809116s)
Sep  7 05:09:27.743: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (30.219194343s)
Sep  7 05:09:32.772: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Bound (35.247926s)
Sep  7 05:09:37.801: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Failed (40.277036992s)
Sep  7 05:09:42.835: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Failed (45.310445667s)
Sep  7 05:09:47.863: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Failed (50.338994314s)
Sep  7 05:09:52.892: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 found and phase=Failed (55.368051081s)
Sep  7 05:09:57.921: INFO: PersistentVolume pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 was removed
Sep  7 05:09:57.921: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  7 05:09:57.949: INFO: Claim "azuredisk-1353" in namespace "pvc-4rm84" doesn't exist in the system
Sep  7 05:09:57.949: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-bhpxt
Sep  7 05:09:57.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 05:10:15.094: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-x5gsj" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  7 05:10:15.124: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 30.138899ms
Sep  7 05:10:17.153: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059911578s
Sep  7 05:10:19.185: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091532876s
Sep  7 05:10:21.216: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122030894s
Sep  7 05:10:23.246: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152613986s
Sep  7 05:10:25.276: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182556144s
... skipping 25 lines ...
Sep  7 05:11:18.088: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.994258517s
Sep  7 05:11:20.119: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.025569199s
Sep  7 05:11:22.150: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.056400841s
Sep  7 05:11:24.192: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.098547828s
Sep  7 05:11:26.223: INFO: Pod "azuredisk-volume-tester-x5gsj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m11.129446108s
STEP: Saw pod success
Sep  7 05:11:26.223: INFO: Pod "azuredisk-volume-tester-x5gsj" satisfied condition "Succeeded or Failed"
Sep  7 05:11:26.223: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-x5gsj"
Sep  7 05:11:26.270: INFO: Pod azuredisk-volume-tester-x5gsj has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-x5gsj in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 05:11:26.367: INFO: deleting PVC "azuredisk-59"/"pvc-2z5d9"
Sep  7 05:11:26.367: INFO: Deleting PersistentVolumeClaim "pvc-2z5d9"
STEP: waiting for claim's PV "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" to be deleted
Sep  7 05:11:26.397: INFO: Waiting up to 10m0s for PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 to get deleted
Sep  7 05:11:26.425: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Released (27.820794ms)
Sep  7 05:11:31.456: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (5.059213849s)
Sep  7 05:11:36.489: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (10.092004071s)
Sep  7 05:11:41.521: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (15.124028353s)
Sep  7 05:11:46.550: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (20.153021895s)
Sep  7 05:11:51.583: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (25.185639935s)
Sep  7 05:11:56.612: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (30.21443594s)
Sep  7 05:12:01.642: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (35.24501324s)
Sep  7 05:12:06.673: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (40.276194163s)
Sep  7 05:12:11.705: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (45.307680796s)
Sep  7 05:12:16.734: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (50.336455357s)
Sep  7 05:12:21.767: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (55.369702146s)
Sep  7 05:12:26.800: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (1m0.402831311s)
Sep  7 05:12:31.834: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (1m5.436862904s)
Sep  7 05:12:36.867: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 found and phase=Failed (1m10.469744471s)
Sep  7 05:12:41.898: INFO: PersistentVolume pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 was removed
Sep  7 05:12:41.898: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  7 05:12:41.926: INFO: Claim "azuredisk-59" in namespace "pvc-2z5d9" doesn't exist in the system
Sep  7 05:12:41.926: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-g9hxf
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 05:13:03.119: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vrd46" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  7 05:13:03.147: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 27.372027ms
Sep  7 05:13:05.179: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059889538s
Sep  7 05:13:07.209: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089814578s
Sep  7 05:13:09.241: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121358639s
Sep  7 05:13:11.270: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151138305s
Sep  7 05:13:13.301: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 10.181930357s
... skipping 9 lines ...
Sep  7 05:13:33.626: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 30.506782406s
Sep  7 05:13:35.656: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 32.537170309s
Sep  7 05:13:37.686: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 34.566614701s
Sep  7 05:13:39.718: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Pending", Reason="", readiness=false. Elapsed: 36.598330063s
Sep  7 05:13:41.748: INFO: Pod "azuredisk-volume-tester-vrd46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.628680715s
STEP: Saw pod success
Sep  7 05:13:41.748: INFO: Pod "azuredisk-volume-tester-vrd46" satisfied condition "Succeeded or Failed"
Sep  7 05:13:41.748: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-vrd46"
Sep  7 05:13:41.791: INFO: Pod azuredisk-volume-tester-vrd46 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.077748 seconds, 1.3GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  7 05:13:41.882: INFO: deleting PVC "azuredisk-2546"/"pvc-glgk8"
Sep  7 05:13:41.882: INFO: Deleting PersistentVolumeClaim "pvc-glgk8"
STEP: waiting for claim's PV "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" to be deleted
Sep  7 05:13:41.914: INFO: Waiting up to 10m0s for PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a to get deleted
Sep  7 05:13:41.941: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Released (27.241222ms)
Sep  7 05:13:46.970: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (5.056204966s)
Sep  7 05:13:52.001: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (10.087792386s)
Sep  7 05:13:57.033: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (15.119550208s)
Sep  7 05:14:02.063: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (20.149599646s)
Sep  7 05:14:07.093: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (25.179434827s)
Sep  7 05:14:12.122: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (30.207940976s)
Sep  7 05:14:17.152: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (35.238848269s)
Sep  7 05:14:22.187: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (40.273253575s)
Sep  7 05:14:27.219: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (45.305813149s)
Sep  7 05:14:32.250: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (50.336152159s)
Sep  7 05:14:37.279: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a found and phase=Failed (55.365160763s)
Sep  7 05:14:42.307: INFO: PersistentVolume pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a was removed
Sep  7 05:14:42.307: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  7 05:14:42.335: INFO: Claim "azuredisk-2546" in namespace "pvc-glgk8" doesn't exist in the system
Sep  7 05:14:42.335: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-tcbgw
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 05:14:54.347: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2xc9q" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  7 05:14:54.376: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 28.825992ms
Sep  7 05:14:56.405: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058139826s
Sep  7 05:14:58.435: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087975466s
Sep  7 05:15:00.465: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118085535s
Sep  7 05:15:02.496: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148896671s
Sep  7 05:15:04.527: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179439231s
... skipping 25 lines ...
Sep  7 05:15:57.340: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.992943233s
Sep  7 05:15:59.372: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.024856125s
Sep  7 05:16:01.403: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.05595709s
Sep  7 05:16:03.434: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.086503643s
Sep  7 05:16:05.465: INFO: Pod "azuredisk-volume-tester-2xc9q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m11.117261978s
STEP: Saw pod success
Sep  7 05:16:05.465: INFO: Pod "azuredisk-volume-tester-2xc9q" satisfied condition "Succeeded or Failed"
Sep  7 05:16:05.465: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-2xc9q"
Sep  7 05:16:05.514: INFO: Pod azuredisk-volume-tester-2xc9q has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-2xc9q in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 05:16:05.608: INFO: deleting PVC "azuredisk-8582"/"pvc-twnlj"
Sep  7 05:16:05.608: INFO: Deleting PersistentVolumeClaim "pvc-twnlj"
STEP: waiting for claim's PV "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" to be deleted
Sep  7 05:16:05.641: INFO: Waiting up to 10m0s for PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 to get deleted
Sep  7 05:16:05.670: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Released (28.690067ms)
Sep  7 05:16:10.704: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (5.062967215s)
Sep  7 05:16:15.737: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (10.095680589s)
Sep  7 05:16:20.769: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (15.128284378s)
Sep  7 05:16:25.801: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (20.160089347s)
Sep  7 05:16:30.831: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (25.189878798s)
Sep  7 05:16:35.867: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 found and phase=Failed (30.226027831s)
Sep  7 05:16:40.897: INFO: PersistentVolume pvc-eb586b3f-db94-4720-9fc5-7547ae944b83 was removed
Sep  7 05:16:40.897: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 05:16:40.926: INFO: Claim "azuredisk-8582" in namespace "pvc-twnlj" doesn't exist in the system
Sep  7 05:16:40.926: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-4xjvs
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 05:16:41.015: INFO: deleting PVC "azuredisk-8582"/"pvc-jptdm"
Sep  7 05:16:41.015: INFO: Deleting PersistentVolumeClaim "pvc-jptdm"
STEP: waiting for claim's PV "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" to be deleted
Sep  7 05:16:41.048: INFO: Waiting up to 10m0s for PersistentVolume pvc-968e561b-7ea0-48c2-b400-c7a797623bec to get deleted
Sep  7 05:16:41.080: INFO: PersistentVolume pvc-968e561b-7ea0-48c2-b400-c7a797623bec found and phase=Failed (32.60403ms)
Sep  7 05:16:46.113: INFO: PersistentVolume pvc-968e561b-7ea0-48c2-b400-c7a797623bec found and phase=Failed (5.065444149s)
Sep  7 05:16:51.150: INFO: PersistentVolume pvc-968e561b-7ea0-48c2-b400-c7a797623bec found and phase=Failed (10.102141191s)
Sep  7 05:16:56.181: INFO: PersistentVolume pvc-968e561b-7ea0-48c2-b400-c7a797623bec was removed
Sep  7 05:16:56.181: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 05:16:56.208: INFO: Claim "azuredisk-8582" in namespace "pvc-jptdm" doesn't exist in the system
Sep  7 05:16:56.208: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-gf4vd
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 05:16:56.294: INFO: deleting PVC "azuredisk-8582"/"pvc-6wqhl"
Sep  7 05:16:56.294: INFO: Deleting PersistentVolumeClaim "pvc-6wqhl"
STEP: waiting for claim's PV "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" to be deleted
Sep  7 05:16:56.325: INFO: Waiting up to 10m0s for PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 to get deleted
Sep  7 05:16:56.358: INFO: PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 found and phase=Failed (32.688249ms)
Sep  7 05:17:01.388: INFO: PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 found and phase=Failed (5.062631223s)
Sep  7 05:17:06.417: INFO: PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 found and phase=Failed (10.091247367s)
Sep  7 05:17:11.446: INFO: PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 found and phase=Failed (15.120749165s)
Sep  7 05:17:16.478: INFO: PersistentVolume pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 was removed
Sep  7 05:17:16.478: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 05:17:16.506: INFO: Claim "azuredisk-8582" in namespace "pvc-6wqhl" doesn't exist in the system
Sep  7 05:17:16.506: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-vzsg6
Sep  7 05:17:16.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8582" for this suite.
... skipping 150 lines ...
STEP: checking the PV
Sep  7 05:18:29.409: INFO: deleting PVC "azuredisk-7051"/"pvc-f66gr"
Sep  7 05:18:29.409: INFO: Deleting PersistentVolumeClaim "pvc-f66gr"
STEP: waiting for claim's PV "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" to be deleted
Sep  7 05:18:29.440: INFO: Waiting up to 10m0s for PersistentVolume pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 to get deleted
Sep  7 05:18:29.471: INFO: PersistentVolume pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 found and phase=Released (31.299533ms)
Sep  7 05:18:34.500: INFO: PersistentVolume pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 found and phase=Failed (5.060380031s)
Sep  7 05:18:39.533: INFO: PersistentVolume pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 found and phase=Failed (10.09299354s)
Sep  7 05:18:44.565: INFO: PersistentVolume pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 was removed
Sep  7 05:18:44.565: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7051 to be removed
Sep  7 05:18:44.593: INFO: Claim "azuredisk-7051" in namespace "pvc-f66gr" doesn't exist in the system
Sep  7 05:18:44.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7051" for this suite.

... skipping 218 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  7 05:20:31.234: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.268 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 248 lines ...
I0907 04:52:22.613768       1 tlsconfig.go:178] loaded client CA [1/"client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt"]: "kubernetes" [] issuer="<self>" (2022-09-07 04:45:00 +0000 UTC to 2032-09-04 04:50:00 +0000 UTC (now=2022-09-07 04:52:22.613756568 +0000 UTC))
I0907 04:52:22.614062       1 tlsconfig.go:200] loaded serving cert ["Generated self signed cert"]: "localhost@1662526341" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer="localhost-ca@1662526340" (2022-09-07 03:52:20 +0000 UTC to 2023-09-07 03:52:20 +0000 UTC (now=2022-09-07 04:52:22.614046493 +0000 UTC))
I0907 04:52:22.614328       1 named_certificates.go:53] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1662526342" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1662526341" (2022-09-07 03:52:21 +0000 UTC to 2023-09-07 03:52:21 +0000 UTC (now=2022-09-07 04:52:22.614313317 +0000 UTC))
I0907 04:52:22.614386       1 secure_serving.go:202] Serving securely on 127.0.0.1:10257
I0907 04:52:22.614442       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0907 04:52:22.614961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/kube-controller-manager...
E0907 04:52:25.481291       1 leaderelection.go:325] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 04:52:25.481506       1 leaderelection.go:248] failed to acquire lease kube-system/kube-controller-manager
I0907 04:52:29.635146       1 leaderelection.go:253] successfully acquired lease kube-system/kube-controller-manager
I0907 04:52:29.636405       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-m4haq0-control-plane-z4nt5_82434a4e-1e09-47b9-8c79-a9f6df9db33c became leader"
I0907 04:52:29.845793       1 request.go:600] Waited for 97.086171ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/discovery.k8s.io/v1beta1?timeout=32s
I0907 04:52:29.894868       1 request.go:600] Waited for 146.150045ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/authorization.k8s.io/v1beta1?timeout=32s
I0907 04:52:29.945663       1 request.go:600] Waited for 196.889346ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/autoscaling/v1?timeout=32s
I0907 04:52:29.995576       1 request.go:600] Waited for 246.831786ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/batch/v1beta1?timeout=32s
... skipping 39 lines ...
I0907 04:52:30.304359       1 reflector.go:219] Starting reflector *v1.Secret (23h51m21.209962167s) from k8s.io/client-go/informers/factory.go:134
I0907 04:52:30.304549       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0907 04:52:30.305055       1 reflector.go:219] Starting reflector *v1.Node (23h51m21.209962167s) from k8s.io/client-go/informers/factory.go:134
I0907 04:52:30.305234       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0907 04:52:30.306517       1 reflector.go:219] Starting reflector *v1.ServiceAccount (23h51m21.209962167s) from k8s.io/client-go/informers/factory.go:134
I0907 04:52:30.307244       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0907 04:52:30.335007       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 04:52:30.335040       1 controllermanager.go:559] Starting "statefulset"
I0907 04:52:30.342096       1 controllermanager.go:574] Started "statefulset"
I0907 04:52:30.342402       1 controllermanager.go:559] Starting "service"
I0907 04:52:30.342122       1 stateful_set.go:146] Starting stateful set controller
I0907 04:52:30.342539       1 shared_informer.go:240] Waiting for caches to sync for stateful set
I0907 04:52:30.349057       1 controllermanager.go:574] Started "service"
... skipping 117 lines ...
I0907 04:52:32.310290       1 plugins.go:639] Loaded volume plugin "kubernetes.io/azure-file"
I0907 04:52:32.310303       1 plugins.go:639] Loaded volume plugin "kubernetes.io/flocker"
I0907 04:52:32.310332       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0907 04:52:32.310353       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0907 04:52:32.310370       1 plugins.go:639] Loaded volume plugin "kubernetes.io/local-volume"
I0907 04:52:32.310383       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0907 04:52:32.310429       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 04:52:32.310439       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0907 04:52:32.310527       1 controllermanager.go:574] Started "persistentvolume-binder"
I0907 04:52:32.310546       1 controllermanager.go:559] Starting "resourcequota"
I0907 04:52:32.310594       1 pv_controller_base.go:308] Starting persistent volume controller
I0907 04:52:32.310605       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0907 04:52:32.626786       1 resource_quota_monitor.go:177] QuotaMonitor using a shared informer for resource "apps/v1, Resource=replicasets"
... skipping 98 lines ...
I0907 04:52:33.409828       1 plugins.go:639] Loaded volume plugin "kubernetes.io/portworx-volume"
I0907 04:52:33.409892       1 plugins.go:639] Loaded volume plugin "kubernetes.io/scaleio"
I0907 04:52:33.409921       1 plugins.go:639] Loaded volume plugin "kubernetes.io/storageos"
I0907 04:52:33.409981       1 plugins.go:639] Loaded volume plugin "kubernetes.io/fc"
I0907 04:52:33.410013       1 plugins.go:639] Loaded volume plugin "kubernetes.io/iscsi"
I0907 04:52:33.410068       1 plugins.go:639] Loaded volume plugin "kubernetes.io/rbd"
I0907 04:52:33.410162       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 04:52:33.410189       1 plugins.go:639] Loaded volume plugin "kubernetes.io/csi"
I0907 04:52:33.410460       1 controllermanager.go:574] Started "attachdetach"
I0907 04:52:33.410492       1 controllermanager.go:559] Starting "replicationcontroller"
I0907 04:52:33.410592       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 04:52:33.410652       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0907 04:52:33.558430       1 controllermanager.go:574] Started "replicationcontroller"
... skipping 316 lines ...
I0907 04:52:37.704191       1 certificate_controller.go:82] Adding certificate request csr-n4r56
I0907 04:52:37.704211       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (2.7µs)
I0907 04:52:37.704222       1 certificate_controller.go:82] Adding certificate request csr-n4r56
I0907 04:52:37.704238       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (2.301µs)
I0907 04:52:37.704248       1 certificate_controller.go:82] Adding certificate request csr-n4r56
I0907 04:52:37.738493       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (34.214624ms)
I0907 04:52:37.738529       1 certificate_controller.go:151] Sync csr-n4r56 failed with : recognized csr "csr-n4r56" as [selfnodeclient nodeclient] but subject access review was not approved
I0907 04:52:37.942613       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (3.598534ms)
I0907 04:52:37.942655       1 certificate_controller.go:151] Sync csr-n4r56 failed with : recognized csr "csr-n4r56" as [selfnodeclient nodeclient] but subject access review was not approved
I0907 04:52:38.064137       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="82.805µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:42990" resp=200
I0907 04:52:38.348674       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (4.770109ms)
I0907 04:52:38.348740       1 certificate_controller.go:151] Sync csr-n4r56 failed with : recognized csr "csr-n4r56" as [selfnodeclient nodeclient] but subject access review was not approved
I0907 04:52:38.474889       1 controller.go:693] Ignoring node capz-m4haq0-control-plane-z4nt5 with Ready condition status False
I0907 04:52:38.475112       1 controller.go:272] Triggering nodeSync
I0907 04:52:38.475205       1 controller.go:291] nodeSync has been triggered
I0907 04:52:38.475292       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 04:52:38.475372       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 04:52:38.475421       1 controller.go:731] It took 0.000132109 seconds to finish nodeSyncInternal
I0907 04:52:38.475686       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-control-plane-z4nt5"
W0907 04:52:38.475792       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m4haq0-control-plane-z4nt5" does not exist
I0907 04:52:38.475699       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m4haq0-control-plane-z4nt5}
I0907 04:52:38.475954       1 taint_manager.go:440] "Updating known taints on node" node="capz-m4haq0-control-plane-z4nt5" taints=[]
I0907 04:52:38.515173       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-control-plane-z4nt5"
I0907 04:52:38.525769       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-control-plane-z4nt5"
I0907 04:52:38.526069       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-m4haq0-control-plane-z4nt5" new_ttl="0s"
I0907 04:52:39.156060       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (6.948546ms)
I0907 04:52:39.156129       1 certificate_controller.go:151] Sync csr-n4r56 failed with : recognized csr "csr-n4r56" as [selfnodeclient nodeclient] but subject access review was not approved
I0907 04:52:39.861076       1 node_lifecycle_controller.go:770] Controller observed a new Node: "capz-m4haq0-control-plane-z4nt5"
I0907 04:52:39.861116       1 controller_utils.go:172] Recording Registered Node capz-m4haq0-control-plane-z4nt5 in Controller event message for node capz-m4haq0-control-plane-z4nt5
I0907 04:52:39.861148       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: eastus::eastus-1
W0907 04:52:39.861406       1 node_lifecycle_controller.go:1013] Missing timestamp for Node capz-m4haq0-control-plane-z4nt5. Assuming now as a timestamp.
I0907 04:52:39.861447       1 node_lifecycle_controller.go:869] Node capz-m4haq0-control-plane-z4nt5 is NotReady as of 2022-09-07 04:52:39.861435045 +0000 UTC m=+19.391837436. Adding it to the Taint queue.
I0907 04:52:39.861584       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
... skipping 9 lines ...
I0907 04:52:40.734317       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 04:52:40.734327       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 04:52:40.734335       1 controller.go:731] It took 2.1201e-05 seconds to finish nodeSyncInternal
I0907 04:52:40.734629       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-control-plane-z4nt5"
I0907 04:52:40.741257       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-oj9sg7" (55.904µs)
I0907 04:52:40.761498       1 certificate_controller.go:173] Finished syncing certificate request "csr-n4r56" (4.780606ms)
I0907 04:52:40.762010       1 certificate_controller.go:151] Sync csr-n4r56 failed with : recognized csr "csr-n4r56" as [selfnodeclient nodeclient] but subject access review was not approved
I0907 04:52:41.208954       1 deployment_controller.go:169] "Adding deployment" deployment="kube-system/coredns"
I0907 04:52:41.209062       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 04:52:41.209022112 +0000 UTC m=+20.739424503"
I0907 04:52:41.209591       1 deployment_util.go:261] Updating replica set "coredns-558bd4d5db" revision to 1
I0907 04:52:41.226768       1 endpoints_controller.go:555] Update endpoints for kube-system/kube-dns, ready: 0 not ready: 0
I0907 04:52:41.244178       1 deployment_controller.go:215] "ReplicaSet added" replicaSet="kube-system/coredns-558bd4d5db"
I0907 04:52:41.244233       1 replica_set.go:286] Adding ReplicaSet kube-system/coredns-558bd4d5db
... skipping 7 lines ...
I0907 04:52:41.301394       1 endpointslicemirroring_controller.go:270] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (96.206µs)
I0907 04:52:41.312759       1 endpoints_controller.go:381] Finished syncing service "kube-system/kube-dns" endpoints. (86.014173ms)
I0907 04:52:41.313080       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 04:52:41.244239653 +0000 UTC m=+20.774641944 - now: 2022-09-07 04:52:41.313068532 +0000 UTC m=+20.843470923]
I0907 04:52:41.314236       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 04:52:41.318242       1 daemon_controller.go:227] Adding daemon set kube-proxy
I0907 04:52:41.327933       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="118.891065ms"
I0907 04:52:41.328031       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 04:52:41.328131       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 04:52:41.328093188 +0000 UTC m=+20.858495479"
I0907 04:52:41.328988       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 04:52:41 +0000 UTC - now: 2022-09-07 04:52:41.328978044 +0000 UTC m=+20.859380435]
I0907 04:52:41.336027       1 endpoints_controller.go:381] Finished syncing service "kube-system/kube-dns" endpoints. (24.601µs)
I0907 04:52:41.336122       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/coredns-558bd4d5db-w9crw" podUID=2dabb3be-5ea8-4e23-be7e-b6fba975d146
I0907 04:52:41.336172       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/coredns-558bd4d5db-w9crw"
I0907 04:52:41.336194       1 replica_set.go:376] Pod coredns-558bd4d5db-w9crw created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-558bd4d5db-w9crw", GenerateName:"coredns-558bd4d5db-", Namespace:"kube-system", SelfLink:"", UID:"2dabb3be-5ea8-4e23-be7e-b6fba975d146", ResourceVersion:"417", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63798123161, loc:(*time.Location)(0x731ea80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"558bd4d5db"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"coredns-558bd4d5db", UID:"c17303ec-af29-4375-b20a-7408808e960a", Controller:(*bool)(0xc002749897), BlockOwnerDeletion:(*bool)(0xc002749898)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002744b10), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002744b28)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config-volume", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001236940), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-xpjgj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00265f8a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"coredns", Image:"k8s.gcr.io/coredns/coredns:v1.8.0", Command:[]string(nil), Args:[]string{"-conf", "/etc/coredns/Corefile"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"dns", HostPort:0, ContainerPort:53, Protocol:"UDP", HostIP:""}, v1.ContainerPort{Name:"dns-tcp", HostPort:0, ContainerPort:53, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:0, ContainerPort:9153, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:178257920, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"170Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:73400320, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"70Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config-volume", ReadOnly:true, MountPath:"/etc/coredns", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-xpjgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc001236ac0), ReadinessProbe:(*v1.Probe)(0xc001236b00), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0011d48a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0027499e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"coredns", DeprecatedServiceAccount:"coredns", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00014d7a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002749a50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", 
Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002749a70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002749a78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002749a7c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001a743e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
... skipping 339 lines ...
I0907 04:52:57.435763       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-8c95fb79b"
I0907 04:52:57.437872       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (47.393532ms)
I0907 04:52:57.438324       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0be26ca5747c4fb, ext:36920981742, loc:(*time.Location)(0x731ea80)}}
I0907 04:52:57.438611       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-8c95fb79b, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0907 04:52:57.445684       1 endpointslice_controller.go:318] Finished syncing service "kube-system/metrics-server" endpoint slices. (35.897848ms)
I0907 04:52:57.469046       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="95.185891ms"
I0907 04:52:57.469329       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0907 04:52:57.469585       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-07 04:52:57.469555537 +0000 UTC m=+36.999957928"
I0907 04:52:57.470713       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-07 04:52:57 +0000 UTC - now: 2022-09-07 04:52:57.470684214 +0000 UTC m=+37.001086605]
I0907 04:52:57.476888       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-8c95fb79b"
I0907 04:52:57.479136       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (40.799583ms)
I0907 04:52:57.479343       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0be26ca5747c4fb, ext:36920981742, loc:(*time.Location)(0x731ea80)}}
I0907 04:52:57.479581       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (246.516µs)
... skipping 46 lines ...
I0907 04:52:59.066087       1 disruption.go:433] updatePod "calico-kube-controllers-969cf87c4-8m59c" -> PDB "calico-kube-controllers"
I0907 04:52:59.065847       1 replica_set.go:439] Pod calico-kube-controllers-969cf87c4-8m59c updated, objectMeta {Name:calico-kube-controllers-969cf87c4-8m59c GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:1a7dad9e-3b5e-40ee-8ea1-50b7d430f50e ResourceVersion:606 Generation:0 CreationTimestamp:2022-09-07 04:52:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:77870fa4-6b7e-4f14-81fe-3c15d2beefb0 Controller:0xc0020f17d7 BlockOwnerDeletion:0xc0020f17d8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77870fa4-6b7e-4f14-81fe-3c15d2beefb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}}]} -> {Name:calico-kube-controllers-969cf87c4-8m59c GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:1a7dad9e-3b5e-40ee-8ea1-50b7d430f50e ResourceVersion:612 Generation:0 CreationTimestamp:2022-09-07 04:52:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:77870fa4-6b7e-4f14-81fe-3c15d2beefb0 Controller:0xc0021088e0 BlockOwnerDeletion:0xc0021088e1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:58 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77870fa4-6b7e-4f14-81fe-3c15d2beefb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]}.
I0907 04:52:59.068442       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (17.671998ms)
I0907 04:52:59.068770       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0be26cab9bf339a, ext:38499234289, loc:(*time.Location)(0x731ea80)}}
I0907 04:52:59.069084       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (320.622µs)
I0907 04:52:59.082084       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="121.945178ms"
I0907 04:52:59.082126       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 04:52:59.082168       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 04:52:59.082148289 +0000 UTC m=+38.612550680"
I0907 04:52:59.089183       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="7.010576ms"
I0907 04:52:59.089664       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 04:52:59.089639097 +0000 UTC m=+38.620041488"
I0907 04:52:59.089578       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0907 04:52:59.091216       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-07 04:52:59 +0000 UTC - now: 2022-09-07 04:52:59.091206403 +0000 UTC m=+38.621608794]
I0907 04:52:59.091424       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 599s
... skipping 111 lines ...
I0907 04:53:05.036512       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (14h0m38.912640945s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0907 04:53:05.036549       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0907 04:53:05.136738       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0907 04:53:05.235867       1 shared_informer.go:270] caches populated
I0907 04:53:05.235894       1 shared_informer.go:247] Caches are synced for resource quota 
I0907 04:53:05.235906       1 resource_quota_controller.go:454] synced quota controller
W0907 04:53:05.483746       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 04:53:05.484165       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0907 04:53:05.484181       1 garbagecollector.go:219] reset restmapper
E0907 04:53:05.522407       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0907 04:53:05.529944       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0907 04:53:05.531371       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=bgppeers", kind "crd.projectcalico.org/v1, Kind=BGPPeer"
I0907 04:53:05.531442       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=globalnetworksets", kind "crd.projectcalico.org/v1, Kind=GlobalNetworkSet"
... skipping 199 lines ...
I0907 04:53:29.345915       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be26d25499ec7d, ext:68876034260, loc:(*time.Location)(0x731ea80)}}
I0907 04:53:29.346082       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be26d254a0b7ae, ext:68876479493, loc:(*time.Location)(0x731ea80)}}
I0907 04:53:29.346206       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 04:53:29.346336       1 daemon_controller.go:1030] Pods to delete for daemon set calico-node: [], deleting 0
I0907 04:53:29.346418       1 daemon_controller.go:1103] Updating daemon set status
I0907 04:53:29.346490       1 daemon_controller.go:1163] Finished syncing daemon set "kube-system/calico-node" (2.98682ms)
I0907 04:53:29.869501       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m4haq0-control-plane-z4nt5 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 04:53:16 +0000 UTC,LastTransitionTime:2022-09-07 04:52:10 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 04:53:26 +0000 UTC,LastTransitionTime:2022-09-07 04:53:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 04:53:29.869630       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-control-plane-z4nt5 ReadyCondition updated. Updating timestamp.
I0907 04:53:29.869665       1 node_lifecycle_controller.go:893] Node capz-m4haq0-control-plane-z4nt5 is healthy again, removing all taints
I0907 04:53:29.869687       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
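[note] The transition above is the node lifecycle controller observing the kubelet's Ready condition flip from False (cni plugin not initialized) to True, after which it drops the not-ready taints and leaves master disruption mode. A minimal, hedged sketch of reading that same condition from a client (nodeName and clientset are assumed placeholders):

    package example

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the node's Ready condition is currently True.
    func nodeIsReady(clientset *kubernetes.Clientset, nodeName string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }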
I0907 04:53:31.186050       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/coredns-558bd4d5db-829x6"
I0907 04:53:31.186109       1 replica_set.go:439] Pod coredns-558bd4d5db-829x6 updated, objectMeta {Name:coredns-558bd4d5db-829x6 GenerateName:coredns-558bd4d5db- Namespace:kube-system SelfLink: UID:1dc0fcc2-5240-490b-b0b9-a85fc95792a4 ResourceVersion:435 Generation:0 CreationTimestamp:2022-09-07 04:52:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:558bd4d5db] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-558bd4d5db UID:c17303ec-af29-4375-b20a-7408808e960a Controller:0xc0027ffe1f BlockOwnerDeletion:0xc0027ffe40}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c17303ec-af29-4375-b20a-7408808e960a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]} -> {Name:coredns-558bd4d5db-829x6 GenerateName:coredns-558bd4d5db- Namespace:kube-system SelfLink: UID:1dc0fcc2-5240-490b-b0b9-a85fc95792a4 ResourceVersion:724 Generation:0 CreationTimestamp:2022-09-07 04:52:41 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:558bd4d5db] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-558bd4d5db 
UID:c17303ec-af29-4375-b20a-7408808e960a Controller:0xc001f53dd7 BlockOwnerDeletion:0xc001f53dd8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c17303ec-af29-4375-b20a-7408808e960a\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:41 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]}.
I0907 04:53:31.186299       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-558bd4d5db", timestamp:time.Time{wall:0xc0be26c64e8ff7fe, ext:20774718549, loc:(*time.Location)(0x731ea80)}}
... skipping 138 lines ...
I0907 04:53:35.865142       1 replica_set.go:439] Pod metrics-server-8c95fb79b-zfj6f updated, objectMeta {Name:metrics-server-8c95fb79b-zfj6f GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:4cec04fc-c309-4c1e-bbf2-b7331cc6648e ResourceVersion:766 Generation:0 CreationTimestamp:2022-09-07 04:52:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:67590089-aa43-47ec-a2b8-b0333b3f8d42 Controller:0xc00218e4f7 BlockOwnerDeletion:0xc00218e4f8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67590089-aa43-47ec-a2b8-b0333b3f8d42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]} -> {Name:metrics-server-8c95fb79b-zfj6f GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:4cec04fc-c309-4c1e-bbf2-b7331cc6648e ResourceVersion:773 Generation:0 
CreationTimestamp:2022-09-07 04:52:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:37db549e4ff51e210e4006a68acb8a24074eaaf6a1b147c1df6459f232b67b65 cni.projectcalico.org/podIP:192.168.134.68/32 cni.projectcalico.org/podIPs:192.168.134.68/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:67590089-aa43-47ec-a2b8-b0333b3f8d42 Controller:0xc002181860 BlockOwnerDeletion:0xc002181861}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"67590089-aa43-47ec-a2b8-b0333b3f8d42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]}.
I0907 04:53:35.865362       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0be26ca5747c4fb, ext:36920981742, loc:(*time.Location)(0x731ea80)}}
I0907 04:53:35.865470       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (116.408µs)
I0907 04:53:35.865523       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-zfj6f"
I0907 04:53:35.865550       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-zfj6f, PodDisruptionBudget controller will avoid syncing.
I0907 04:53:35.865557       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-zfj6f"
W0907 04:53:36.686719       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 04:53:42.235859       1 replica_set.go:439] Pod calico-kube-controllers-969cf87c4-8m59c updated, objectMeta {Name:calico-kube-controllers-969cf87c4-8m59c GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:1a7dad9e-3b5e-40ee-8ea1-50b7d430f50e ResourceVersion:771 Generation:0 CreationTimestamp:2022-09-07 04:52:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[cni.projectcalico.org/containerID:ac5c7ac996b9ea3cf6c07e564a9db6bbb5bbb86f1592e2bc66383e5ab9f55486 cni.projectcalico.org/podIP:192.168.134.67/32 cni.projectcalico.org/podIPs:192.168.134.67/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:77870fa4-6b7e-4f14-81fe-3c15d2beefb0 Controller:0xc002180fc7 BlockOwnerDeletion:0xc002180fc8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77870fa4-6b7e-4f14-81fe-3c15d2beefb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]} -> {Name:calico-kube-controllers-969cf87c4-8m59c 
GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:1a7dad9e-3b5e-40ee-8ea1-50b7d430f50e ResourceVersion:789 Generation:0 CreationTimestamp:2022-09-07 04:52:58 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[cni.projectcalico.org/containerID:ac5c7ac996b9ea3cf6c07e564a9db6bbb5bbb86f1592e2bc66383e5ab9f55486 cni.projectcalico.org/podIP:192.168.134.67/32 cni.projectcalico.org/podIPs:192.168.134.67/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:77870fa4-6b7e-4f14-81fe-3c15d2beefb0 Controller:0xc000fc29e7 BlockOwnerDeletion:0xc000fc29e8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 04:52:58 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77870fa4-6b7e-4f14-81fe-3c15d2beefb0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{".":{},"f:kubernetes.io/os":{}},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 04:52:59 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 04:53:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}}} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 04:53:42 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.134.67\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]}.
I0907 04:53:42.236090       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0be26cab9bf339a, ext:38499234289, loc:(*time.Location)(0x731ea80)}}
I0907 04:53:42.236217       1 replica_set.go:649] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (136.21µs)
I0907 04:53:42.236262       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-8m59c"
I0907 04:53:42.236287       1 disruption.go:433] updatePod "calico-kube-controllers-969cf87c4-8m59c" -> PDB "calico-kube-controllers"
I0907 04:53:42.236379       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (33.903µs)
... skipping 105 lines ...
I0907 04:54:18.702543       1 certificate_controller.go:173] Finished syncing certificate request "csr-qsdrc" (2.1µs)
I0907 04:54:19.693643       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:54:19.814240       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:54:23.904168       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m4haq0-md-0-4c66f}
I0907 04:54:23.904209       1 taint_manager.go:440] "Updating known taints on node" node="capz-m4haq0-md-0-4c66f" taints=[]
I0907 04:54:23.904166       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
W0907 04:54:23.904411       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m4haq0-md-0-4c66f" does not exist
I0907 04:54:23.905242       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be26c8dc1f73ec, ext:31002225631, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:23.905424       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be26dff5f78d4a, ext:123435818401, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:23.905491       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-m4haq0-md-0-4c66f], creating 1
I0907 04:54:23.906137       1 controller.go:693] Ignoring node capz-m4haq0-md-0-4c66f with Ready condition status False
I0907 04:54:23.906166       1 controller.go:272] Triggering nodeSync
I0907 04:54:23.906202       1 controller.go:291] nodeSync has been triggered
... skipping 173 lines ...
I0907 04:54:30.802421       1 controller.go:776] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 04:54:30.802486       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 04:54:30.802572       1 controller.go:731] It took 0.00014361 seconds to finish nodeSyncInternal
I0907 04:54:30.806119       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m4haq0-md-0-7czdt}
I0907 04:54:30.806336       1 taint_manager.go:440] "Updating known taints on node" node="capz-m4haq0-md-0-7czdt" taints=[]
I0907 04:54:30.806449       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
W0907 04:54:30.806535       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-m4haq0-md-0-7czdt" does not exist
I0907 04:54:30.807308       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be26e0823c1e27, ext:125567896602, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:30.807760       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be26e1b025531e, ext:130338154869, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:30.807955       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set kube-proxy: [capz-m4haq0-md-0-7czdt], creating 1
I0907 04:54:30.808692       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be26e09b37610a, ext:125987016445, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:30.809081       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be26e1b03982aa, ext:130339477761, loc:(*time.Location)(0x731ea80)}}
I0907 04:54:30.809267       1 daemon_controller.go:968] Nodes needing daemon pods for daemon set calico-node: [capz-m4haq0-md-0-7czdt], creating 1
... skipping 364 lines ...
I0907 04:54:53.997212       1 controller.go:748] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0907 04:54:53.997336       1 controller.go:731] It took 0.001179684 seconds to finish nodeSyncInternal
I0907 04:54:54.010306       1 controller_utils.go:221] Made sure that Node capz-m4haq0-md-0-4c66f has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0907 04:54:54.010578       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 04:54:54.721414       1 gc_controller.go:161] GC'ing orphaned
I0907 04:54:54.721453       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 04:54:54.883955       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m4haq0-md-0-4c66f transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 04:54:33 +0000 UTC,LastTransitionTime:2022-09-07 04:54:23 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 04:54:53 +0000 UTC,LastTransitionTime:2022-09-07 04:54:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 04:54:54.884125       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 04:54:54.899220       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m4haq0-md-0-4c66f}
I0907 04:54:54.899581       1 taint_manager.go:440] "Updating known taints on node" node="capz-m4haq0-md-0-4c66f" taints=[]
I0907 04:54:54.899756       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-m4haq0-md-0-4c66f"
I0907 04:54:54.900178       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 04:54:54.900532       1 node_lifecycle_controller.go:893] Node capz-m4haq0-md-0-4c66f is healthy again, removing all taints
... skipping 59 lines ...
I0907 04:55:01.042546       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 04:55:01.065276       1 controller_utils.go:221] Made sure that Node capz-m4haq0-md-0-7czdt has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0907 04:55:01.066115       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 04:55:04.690993       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:55:04.695272       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:55:04.817205       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:55:04.902042       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-m4haq0-md-0-7czdt transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 04:54:40 +0000 UTC,LastTransitionTime:2022-09-07 04:54:30 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 04:55:00 +0000 UTC,LastTransitionTime:2022-09-07 04:55:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 04:55:04.902111       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-7czdt ReadyCondition updated. Updating timestamp.
I0907 04:55:05.035970       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-m4haq0-md-0-7czdt}
I0907 04:55:05.036004       1 taint_manager.go:440] "Updating known taints on node" node="capz-m4haq0-md-0-7czdt" taints=[]
I0907 04:55:05.036056       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-m4haq0-md-0-7czdt"
I0907 04:55:05.036136       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 04:55:05.036168       1 node_lifecycle_controller.go:893] Node capz-m4haq0-md-0-7czdt is healthy again, removing all taints
... skipping 347 lines ...
I0907 04:57:35.320314       1 pv_controller.go:1108] reclaimVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: policy is Delete
I0907 04:57:35.320321       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]]
I0907 04:57:35.320325       1 pv_controller.go:1764] operation "delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]" is already running, skipping
I0907 04:57:35.320344       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960] started
I0907 04:57:35.334156       1 pv_controller.go:1341] isVolumeReleased[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is released
I0907 04:57:35.334596       1 pv_controller.go:1405] doDeleteVolume [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]
I0907 04:57:35.362223       1 pv_controller.go:1260] deletion of volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 04:57:35.362508       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: set phase Failed
I0907 04:57:35.362720       1 pv_controller.go:858] updating PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: set phase Failed
I0907 04:57:35.377482       1 pv_protection_controller.go:205] Got event on PV pvc-5d3a64f3-2291-4c16-9f78-be3350d05960
I0907 04:57:35.377873       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" with version 1306
I0907 04:57:35.378921       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: phase: Failed, bound to: "azuredisk-8081/pvc-bflhk (uid: 5d3a64f3-2291-4c16-9f78-be3350d05960)", boundByController: true
I0907 04:57:35.379187       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is bound to claim azuredisk-8081/pvc-bflhk
I0907 04:57:35.379375       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: claim azuredisk-8081/pvc-bflhk not found
I0907 04:57:35.379546       1 pv_controller.go:1108] reclaimVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: policy is Delete
I0907 04:57:35.379721       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]]
I0907 04:57:35.378847       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" with version 1306
I0907 04:57:35.380074       1 pv_controller.go:879] volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" entered phase "Failed"
I0907 04:57:35.380218       1 pv_controller.go:901] volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 04:57:35.380027       1 pv_controller.go:1764] operation "delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]" is already running, skipping
E0907 04:57:35.380573       1 goroutinemap.go:150] Operation for "delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]" failed. No retries permitted until 2022-09-07 04:57:35.880262701 +0000 UTC m=+315.410665092 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 04:57:35.380749       1 event.go:291] "Event occurred" object="pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
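(Editor's annotation of the pattern above, not part of the log.) The sequence from 04:57:35 shows the PV controller's Delete reclaim path failing because the Azure managed disk backing pvc-5d3a64f3-… is still attached to node capz-m4haq0-md-0-7czdt; goroutinemap then postpones the operation with exponential backoff (500ms after this failure, 1s after the next) until the attach/detach controller detaches the disk. A minimal, hypothetical Go sketch of that retry-with-backoff pattern follows; deleteManagedDisk, deleteWithBackoff, and the fail-twice behaviour are illustrative stand-ins, not the controller-manager implementation:

```go
// Hypothetical sketch of the retry-with-exponential-backoff pattern seen in
// the log: deleting an attached disk fails, the operation is postponed with a
// doubling delay, and a later retry succeeds once the disk has been detached.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// deleteManagedDisk is a stand-in for the Azure delete call seen in the log
// (azure_managedDiskController); here it simply fails until the third attempt,
// simulating a disk that is detached while we are backing off.
var calls int

func deleteManagedDisk(diskURI string) error {
	calls++
	if calls < 3 {
		return errStillAttached
	}
	return nil
}

// deleteWithBackoff mirrors the "No retries permitted until ..." behaviour:
// each failure doubles the wait before the next attempt.
func deleteWithBackoff(diskURI string, maxAttempts int) error {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := deleteManagedDisk(diskURI)
		if err == nil {
			return nil // disk deleted; the PV can now be removed
		}
		fmt.Printf("attempt %d: %v; retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // exponential backoff
	}
	return fmt.Errorf("could not delete %s after %d attempts", diskURI, maxAttempts)
}

func main() {
	_ = deleteWithBackoff("/subscriptions/.../disks/example-dynamic-pvc", 5)
}
```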
I0907 04:57:35.473600       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 04:57:35.574571       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="81.403µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:58710" resp=200
I0907 04:57:37.684968       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0907 04:57:41.248975       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 04:57:41.249022       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960 to the node "capz-m4haq0-md-0-7czdt" mounted false
... skipping 9 lines ...
I0907 04:57:44.737392       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0907 04:57:45.066431       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-7czdt ReadyCondition updated. Updating timestamp.
I0907 04:57:45.574213       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="81.904µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:52646" resp=200
I0907 04:57:49.702318       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:57:49.825111       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:57:49.825185       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" with version 1306
I0907 04:57:49.825231       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: phase: Failed, bound to: "azuredisk-8081/pvc-bflhk (uid: 5d3a64f3-2291-4c16-9f78-be3350d05960)", boundByController: true
I0907 04:57:49.825280       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is bound to claim azuredisk-8081/pvc-bflhk
I0907 04:57:49.825304       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: claim azuredisk-8081/pvc-bflhk not found
I0907 04:57:49.825313       1 pv_controller.go:1108] reclaimVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: policy is Delete
I0907 04:57:49.825330       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]]
I0907 04:57:49.825367       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960] started
I0907 04:57:49.833547       1 pv_controller.go:1341] isVolumeReleased[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is released
I0907 04:57:49.833569       1 pv_controller.go:1405] doDeleteVolume [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]
I0907 04:57:49.833610       1 pv_controller.go:1260] deletion of volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) since it's in attaching or detaching state
I0907 04:57:49.833647       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: set phase Failed
I0907 04:57:49.833658       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: phase Failed already set
E0907 04:57:49.833694       1 goroutinemap.go:150] Operation for "delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]" failed. No retries permitted until 2022-09-07 04:57:50.833667646 +0000 UTC m=+330.364070037 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960) since it's in attaching or detaching state"
I0907 04:57:52.314298       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 93 items received
I0907 04:57:52.724963       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 11 items received
I0907 04:57:54.722732       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 16 items received
I0907 04:57:54.728292       1 gc_controller.go:161] GC'ing orphaned
I0907 04:57:54.728319       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 04:57:55.575488       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="84.905µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54882" resp=200
... skipping 8 lines ...
I0907 04:58:01.695343       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 96 items received
I0907 04:58:04.313101       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 16 items received
I0907 04:58:04.694589       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:58:04.702738       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:58:04.825216       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:58:04.825443       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" with version 1306
I0907 04:58:04.825563       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: phase: Failed, bound to: "azuredisk-8081/pvc-bflhk (uid: 5d3a64f3-2291-4c16-9f78-be3350d05960)", boundByController: true
I0907 04:58:04.825662       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is bound to claim azuredisk-8081/pvc-bflhk
I0907 04:58:04.825733       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: claim azuredisk-8081/pvc-bflhk not found
I0907 04:58:04.825765       1 pv_controller.go:1108] reclaimVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: policy is Delete
I0907 04:58:04.825810       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]]
I0907 04:58:04.825874       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960] started
I0907 04:58:04.835311       1 pv_controller.go:1341] isVolumeReleased[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is released
... skipping 4 lines ...
I0907 04:58:10.015342       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960
I0907 04:58:10.015394       1 pv_controller.go:1436] volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" deleted
I0907 04:58:10.015445       1 pv_controller.go:1284] deleteVolumeOperation [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: success
I0907 04:58:10.026597       1 pv_protection_controller.go:205] Got event on PV pvc-5d3a64f3-2291-4c16-9f78-be3350d05960
I0907 04:58:10.026721       1 pv_protection_controller.go:125] Processing PV pvc-5d3a64f3-2291-4c16-9f78-be3350d05960
I0907 04:58:10.027230       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" with version 1360
I0907 04:58:10.027299       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: phase: Failed, bound to: "azuredisk-8081/pvc-bflhk (uid: 5d3a64f3-2291-4c16-9f78-be3350d05960)", boundByController: true
I0907 04:58:10.027338       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: volume is bound to claim azuredisk-8081/pvc-bflhk
I0907 04:58:10.027421       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: claim azuredisk-8081/pvc-bflhk not found
I0907 04:58:10.027441       1 pv_controller.go:1108] reclaimVolume[pvc-5d3a64f3-2291-4c16-9f78-be3350d05960]: policy is Delete
I0907 04:58:10.027620       1 pv_controller.go:1753] scheduleOperation[delete-pvc-5d3a64f3-2291-4c16-9f78-be3350d05960[50d4da78-0605-4130-8e59-e631d6976ace]]
I0907 04:58:10.027721       1 pv_controller.go:1232] deleteVolumeOperation [pvc-5d3a64f3-2291-4c16-9f78-be3350d05960] started
I0907 04:58:10.037580       1 pv_controller.go:1244] Volume "pvc-5d3a64f3-2291-4c16-9f78-be3350d05960" is already being deleted
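(Editor's annotation.) Once the detach completes, the next deleteVolumeOperation retry at 04:58:10 succeeds: azure_managedDiskController deletes the managed disk and the PV object for pvc-5d3a64f3-… is cleaned up. The same fail-while-attached, back off, detach, then delete sequence repeats below for the other dynamically provisioned PVs (pvc-e8df6a04-…, pvc-f3f34dd9-…, pvc-c22e64ed-…).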
... skipping 115 lines ...
I0907 04:58:15.729998       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0907 04:58:15.750978       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid b72f3630-7572-4772-ada1-b787ddb2e221, event type delete
I0907 04:58:15.751887       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" lun 0 to node "capz-m4haq0-md-0-4c66f".
I0907 04:58:15.752131       1 azure_controller_standard.go:93] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - attach disk(capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) with DiskEncryptionSetID()
I0907 04:58:15.758895       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-wgp9q, uid e30466a2-e982-4567-bfd2-ea42bf58c65b, event type delete
I0907 04:58:15.760266       1 publisher.go:181] Finished syncing namespace "azuredisk-8081" (9.446599ms)
E0907 04:58:15.773828       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-db7wj" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0907 04:58:15.796033       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0907 04:58:15.797045       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid ec7d1c8b-78f1-45a7-a8b2-f3497a48ea8e, event type delete
I0907 04:58:15.797242       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.8µs)
I0907 04:58:15.884057       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-hlmkj.17127b266b34f8a9, uid e83a9c96-1349-4962-9f54-4941c532989a, event type delete
I0907 04:58:15.889370       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-hlmkj.17127b2a3394a25b, uid a6f0795e-3dc5-4a4a-87c7-e6e7a5792bc9, event type delete
I0907 04:58:15.902939       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-hlmkj.17127b2ea2697388, uid b01b2a06-e852-4078-a23f-abce8b4b9c06, event type delete
... skipping 4 lines ...
I0907 04:58:15.923253       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name pvc-bflhk.17127b264f8bc1fb, uid 90013966-d878-4f7b-830b-440170ba962d, event type delete
I0907 04:58:15.938413       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.1µs)
I0907 04:58:15.940839       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8081, estimate: 0, errors: <nil>
I0907 04:58:15.963553       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (236.569596ms)
I0907 04:58:16.353504       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0907 04:58:16.390348       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-zlg8z, uid 21c6875b-522b-47e6-8313-d1d10716164d, event type delete
E0907 04:58:16.406558       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-7n5p5" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0907 04:58:16.441394       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0907 04:58:16.441601       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid a4ff2784-f2cb-4805-8550-8d3444b0a6a9, event type delete
I0907 04:58:16.441682       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3.4µs)
I0907 04:58:16.500278       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid 01d2ff35-5f33-4583-8056-6310b589fb9b, event type delete
I0907 04:58:16.501521       1 publisher.go:181] Finished syncing namespace "azuredisk-2540" (1.495695ms)
I0907 04:58:16.531212       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3µs)
I0907 04:58:16.531533       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0907 04:58:16.541595       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (191.501639ms)
I0907 04:58:16.984120       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0907 04:58:17.037300       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-s5z66, uid 09b0dd14-1bfe-4ad1-965b-b880071d190e, event type delete
E0907 04:58:17.053096       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-2bf5p" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0907 04:58:17.055241       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid f6e55814-a872-4c24-8a9e-da71f814cfdc, event type delete
I0907 04:58:17.058242       1 publisher.go:181] Finished syncing namespace "azuredisk-4728" (3.199803ms)
I0907 04:58:17.068122       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0907 04:58:17.068425       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid 4c65550d-eebf-45ef-926d-70af81304b86, event type delete
I0907 04:58:17.068460       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.6µs)
I0907 04:58:17.163538       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.8µs)
... skipping 184 lines ...
I0907 04:58:54.046854       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: claim azuredisk-5466/pvc-smz89 not found
I0907 04:58:54.046870       1 pv_controller.go:1108] reclaimVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: policy is Delete
I0907 04:58:54.046884       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]]
I0907 04:58:54.046891       1 pv_controller.go:1764] operation "delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]" is already running, skipping
I0907 04:58:54.050729       1 pv_controller.go:1341] isVolumeReleased[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is released
I0907 04:58:54.050747       1 pv_controller.go:1405] doDeleteVolume [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]
I0907 04:58:54.078838       1 pv_controller.go:1260] deletion of volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 04:58:54.078868       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: set phase Failed
I0907 04:58:54.078877       1 pv_controller.go:858] updating PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: set phase Failed
I0907 04:58:54.086011       1 pv_protection_controller.go:205] Got event on PV pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1
I0907 04:58:54.086414       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" with version 1493
I0907 04:58:54.086645       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" with version 1493
I0907 04:58:54.086841       1 pv_controller.go:879] volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" entered phase "Failed"
I0907 04:58:54.087060       1 pv_controller.go:901] volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
E0907 04:58:54.087252       1 goroutinemap.go:150] Operation for "delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]" failed. No retries permitted until 2022-09-07 04:58:54.587221633 +0000 UTC m=+394.117624024 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 04:58:54.086672       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: phase: Failed, bound to: "azuredisk-5466/pvc-smz89 (uid: e8df6a04-9e45-4aba-ac85-d0235932d7b1)", boundByController: true
I0907 04:58:54.087837       1 event.go:291] "Event occurred" object="pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 04:58:54.087996       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is bound to claim azuredisk-5466/pvc-smz89
I0907 04:58:54.088183       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: claim azuredisk-5466/pvc-smz89 not found
I0907 04:58:54.088341       1 pv_controller.go:1108] reclaimVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: policy is Delete
I0907 04:58:54.088481       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]]
I0907 04:58:54.088616       1 pv_controller.go:1766] operation "delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]" postponed due to exponential backoff
... skipping 12 lines ...
I0907 04:59:04.287223       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1"
I0907 04:59:04.287252       1 azure_controller_standard.go:166] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1)
I0907 04:59:04.695934       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:59:04.706090       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:59:04.829421       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:59:04.829524       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" with version 1493
I0907 04:59:04.829603       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: phase: Failed, bound to: "azuredisk-5466/pvc-smz89 (uid: e8df6a04-9e45-4aba-ac85-d0235932d7b1)", boundByController: true
I0907 04:59:04.829675       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is bound to claim azuredisk-5466/pvc-smz89
I0907 04:59:04.829712       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: claim azuredisk-5466/pvc-smz89 not found
I0907 04:59:04.829728       1 pv_controller.go:1108] reclaimVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: policy is Delete
I0907 04:59:04.829750       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]]
I0907 04:59:04.829806       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1] started
I0907 04:59:04.833922       1 pv_controller.go:1341] isVolumeReleased[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is released
I0907 04:59:04.833947       1 pv_controller.go:1405] doDeleteVolume [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]
I0907 04:59:04.834163       1 pv_controller.go:1260] deletion of volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) since it's in attaching or detaching state
I0907 04:59:04.834184       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: set phase Failed
I0907 04:59:04.834195       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: phase Failed already set
E0907 04:59:04.834356       1 goroutinemap.go:150] Operation for "delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]" failed. No retries permitted until 2022-09-07 04:59:05.834300855 +0000 UTC m=+405.364703146 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) since it's in attaching or detaching state"
I0907 04:59:05.078187       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 04:59:05.541761       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 04:59:05.574385       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="119.308µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:55582" resp=200
I0907 04:59:09.974949       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0907 04:59:14.694556       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) returned with <nil>
I0907 04:59:14.694606       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1) succeeded
... skipping 7 lines ...
I0907 04:59:14.753400       1 controller.go:790] Finished updateLoadBalancerHosts
I0907 04:59:14.753429       1 controller.go:731] It took 2.2702e-05 seconds to finish nodeSyncInternal
I0907 04:59:15.574589       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="86.406µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53980" resp=200
I0907 04:59:19.706220       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 04:59:19.829762       1 pv_controller_base.go:528] resyncing PV controller
I0907 04:59:19.829880       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" with version 1493
I0907 04:59:19.829985       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: phase: Failed, bound to: "azuredisk-5466/pvc-smz89 (uid: e8df6a04-9e45-4aba-ac85-d0235932d7b1)", boundByController: true
I0907 04:59:19.830135       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is bound to claim azuredisk-5466/pvc-smz89
I0907 04:59:19.830266       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: claim azuredisk-5466/pvc-smz89 not found
I0907 04:59:19.830286       1 pv_controller.go:1108] reclaimVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: policy is Delete
I0907 04:59:19.830393       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]]
I0907 04:59:19.830440       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1] started
I0907 04:59:19.837691       1 pv_controller.go:1341] isVolumeReleased[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is released
I0907 04:59:19.837713       1 pv_controller.go:1405] doDeleteVolume [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]
I0907 04:59:20.867148       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0907 04:59:25.112539       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1
I0907 04:59:25.112578       1 pv_controller.go:1436] volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" deleted
I0907 04:59:25.112591       1 pv_controller.go:1284] deleteVolumeOperation [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: success
I0907 04:59:25.121919       1 pv_protection_controller.go:205] Got event on PV pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1
I0907 04:59:25.122192       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1" with version 1542
I0907 04:59:25.122236       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: phase: Failed, bound to: "azuredisk-5466/pvc-smz89 (uid: e8df6a04-9e45-4aba-ac85-d0235932d7b1)", boundByController: true
I0907 04:59:25.122314       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: volume is bound to claim azuredisk-5466/pvc-smz89
I0907 04:59:25.122371       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: claim azuredisk-5466/pvc-smz89 not found
I0907 04:59:25.122388       1 pv_controller.go:1108] reclaimVolume[pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1]: policy is Delete
I0907 04:59:25.122408       1 pv_controller.go:1753] scheduleOperation[delete-pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1[98bb956c-e7a3-490e-a8f3-7a1c004d5521]]
I0907 04:59:25.122473       1 pv_controller.go:1232] deleteVolumeOperation [pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1] started
I0907 04:59:25.122612       1 pv_protection_controller.go:125] Processing PV pvc-e8df6a04-9e45-4aba-ac85-d0235932d7b1
... skipping 275 lines ...
I0907 04:59:51.774701       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: claim azuredisk-2790/pvc-nhr4g not found
I0907 04:59:51.774806       1 pv_controller.go:1108] reclaimVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: policy is Delete
I0907 04:59:51.774911       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]]
I0907 04:59:51.775097       1 pv_controller.go:1764] operation "delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]" is already running, skipping
I0907 04:59:51.777058       1 pv_controller.go:1341] isVolumeReleased[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is released
I0907 04:59:51.777077       1 pv_controller.go:1405] doDeleteVolume [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]
I0907 04:59:51.808643       1 pv_controller.go:1260] deletion of volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 04:59:51.808674       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: set phase Failed
I0907 04:59:51.808684       1 pv_controller.go:858] updating PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: set phase Failed
I0907 04:59:51.813442       1 pv_protection_controller.go:205] Got event on PV pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1
I0907 04:59:51.813477       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" with version 1639
I0907 04:59:51.813812       1 pv_controller.go:879] volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" entered phase "Failed"
I0907 04:59:51.813892       1 pv_controller.go:901] volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 04:59:51.813512       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" with version 1639
I0907 04:59:51.814109       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: phase: Failed, bound to: "azuredisk-2790/pvc-nhr4g (uid: f3f34dd9-0359-46ac-a14d-e3a6dffd05f1)", boundByController: true
I0907 04:59:51.814152       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is bound to claim azuredisk-2790/pvc-nhr4g
I0907 04:59:51.814206       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: claim azuredisk-2790/pvc-nhr4g not found
I0907 04:59:51.814283       1 pv_controller.go:1108] reclaimVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: policy is Delete
I0907 04:59:51.814306       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]]
I0907 04:59:51.814675       1 event.go:291] "Event occurred" object="pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
E0907 04:59:51.814729       1 goroutinemap.go:150] Operation for "delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]" failed. No retries permitted until 2022-09-07 04:59:52.314033088 +0000 UTC m=+451.844435479 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 04:59:51.814755       1 pv_controller.go:1766] operation "delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]" postponed due to exponential backoff
I0907 04:59:54.417593       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 04:59:54.417655       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 to the node "capz-m4haq0-md-0-4c66f" mounted false
I0907 04:59:54.508277       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m4haq0-md-0-4c66f" succeeded. VolumesAttached: []
I0907 04:59:54.508635       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 04:59:54.508776       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 to the node "capz-m4haq0-md-0-4c66f" mounted false
... skipping 8 lines ...
I0907 04:59:55.085591       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 04:59:55.573959       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="80.005µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:52488" resp=200
I0907 05:00:04.696883       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:00:04.708076       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:00:04.831823       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:00:04.831907       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" with version 1639
I0907 05:00:04.831950       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: phase: Failed, bound to: "azuredisk-2790/pvc-nhr4g (uid: f3f34dd9-0359-46ac-a14d-e3a6dffd05f1)", boundByController: true
I0907 05:00:04.832048       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is bound to claim azuredisk-2790/pvc-nhr4g
I0907 05:00:04.832153       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: claim azuredisk-2790/pvc-nhr4g not found
I0907 05:00:04.832236       1 pv_controller.go:1108] reclaimVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: policy is Delete
I0907 05:00:04.832320       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]]
I0907 05:00:04.832455       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1] started
I0907 05:00:04.840123       1 pv_controller.go:1341] isVolumeReleased[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is released
I0907 05:00:04.840143       1 pv_controller.go:1405] doDeleteVolume [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]
I0907 05:00:04.840181       1 pv_controller.go:1260] deletion of volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) since it's in attaching or detaching state
I0907 05:00:04.840195       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: set phase Failed
I0907 05:00:04.840208       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: phase Failed already set
E0907 05:00:04.840243       1 goroutinemap.go:150] Operation for "delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]" failed. No retries permitted until 2022-09-07 05:00:05.840218552 +0000 UTC m=+465.370620943 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) since it's in attaching or detaching state"
I0907 05:00:05.585246       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="71.305µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:54626" resp=200
I0907 05:00:05.586562       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:00:10.062334       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) returned with <nil>
I0907 05:00:10.062381       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1) succeeded
I0907 05:00:10.062394       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1 was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:00:10.062421       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1") on node "capz-m4haq0-md-0-4c66f" 
... skipping 2 lines ...
I0907 05:00:15.575047       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="91.106µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:32948" resp=200
I0907 05:00:18.072432       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0907 05:00:19.709042       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:00:19.721676       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0907 05:00:19.832678       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:00:19.832883       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" with version 1639
I0907 05:00:19.833045       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: phase: Failed, bound to: "azuredisk-2790/pvc-nhr4g (uid: f3f34dd9-0359-46ac-a14d-e3a6dffd05f1)", boundByController: true
I0907 05:00:19.833162       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is bound to claim azuredisk-2790/pvc-nhr4g
I0907 05:00:19.833252       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: claim azuredisk-2790/pvc-nhr4g not found
I0907 05:00:19.833324       1 pv_controller.go:1108] reclaimVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: policy is Delete
I0907 05:00:19.833421       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]]
I0907 05:00:19.833531       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1] started
I0907 05:00:19.839561       1 pv_controller.go:1341] isVolumeReleased[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is released
I0907 05:00:19.839583       1 pv_controller.go:1405] doDeleteVolume [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]
I0907 05:00:25.117835       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1
I0907 05:00:25.117874       1 pv_controller.go:1436] volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" deleted
I0907 05:00:25.117889       1 pv_controller.go:1284] deleteVolumeOperation [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: success
I0907 05:00:25.127408       1 pv_protection_controller.go:205] Got event on PV pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1
I0907 05:00:25.127448       1 pv_protection_controller.go:125] Processing PV pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1
I0907 05:00:25.127846       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" with version 1687
I0907 05:00:25.127894       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: phase: Failed, bound to: "azuredisk-2790/pvc-nhr4g (uid: f3f34dd9-0359-46ac-a14d-e3a6dffd05f1)", boundByController: true
I0907 05:00:25.127927       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: volume is bound to claim azuredisk-2790/pvc-nhr4g
I0907 05:00:25.127958       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: claim azuredisk-2790/pvc-nhr4g not found
I0907 05:00:25.127973       1 pv_controller.go:1108] reclaimVolume[pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1]: policy is Delete
I0907 05:00:25.127989       1 pv_controller.go:1753] scheduleOperation[delete-pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1[eb1e8891-2648-4346-aacb-a589b0e66970]]
I0907 05:00:25.128018       1 pv_controller.go:1232] deleteVolumeOperation [pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1] started
I0907 05:00:25.135344       1 pv_controller.go:1244] Volume "pvc-f3f34dd9-0359-46ac-a14d-e3a6dffd05f1" is already being deleted
... skipping 278 lines ...
I0907 05:00:52.409747       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: claim azuredisk-5356/pvc-vncjg not found
I0907 05:00:52.409902       1 pv_controller.go:1108] reclaimVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: policy is Delete
I0907 05:00:52.410017       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]]
I0907 05:00:52.410133       1 pv_controller.go:1764] operation "delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]" is already running, skipping
I0907 05:00:52.415130       1 pv_controller.go:1341] isVolumeReleased[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is released
I0907 05:00:52.415369       1 pv_controller.go:1405] doDeleteVolume [pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]
I0907 05:00:52.453406       1 pv_controller.go:1260] deletion of volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:00:52.453436       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: set phase Failed
I0907 05:00:52.453445       1 pv_controller.go:858] updating PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: set phase Failed
I0907 05:00:52.457666       1 pv_protection_controller.go:205] Got event on PV pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2
I0907 05:00:52.457702       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" with version 1786
I0907 05:00:52.457835       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: phase: Failed, bound to: "azuredisk-5356/pvc-vncjg (uid: c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2)", boundByController: true
I0907 05:00:52.457905       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is bound to claim azuredisk-5356/pvc-vncjg
I0907 05:00:52.457970       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: claim azuredisk-5356/pvc-vncjg not found
I0907 05:00:52.457983       1 pv_controller.go:1108] reclaimVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: policy is Delete
I0907 05:00:52.458021       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]]
I0907 05:00:52.458082       1 pv_controller.go:1764] operation "delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]" is already running, skipping
I0907 05:00:52.458237       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" with version 1786
I0907 05:00:52.458262       1 pv_controller.go:879] volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" entered phase "Failed"
I0907 05:00:52.458272       1 pv_controller.go:901] volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
E0907 05:00:52.458363       1 goroutinemap.go:150] Operation for "delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]" failed. No retries permitted until 2022-09-07 05:00:52.958331912 +0000 UTC m=+512.488734203 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:00:52.458557       1 event.go:291] "Event occurred" object="pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:00:54.516640       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 05:00:54.516682       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 to the node "capz-m4haq0-md-0-4c66f" mounted false
I0907 05:00:54.565856       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 05:00:54.565897       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 to the node "capz-m4haq0-md-0-4c66f" mounted false
I0907 05:00:54.566934       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m4haq0-md-0-4c66f" succeeded. VolumesAttached: []
... skipping 16 lines ...
I0907 05:00:58.029840       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0907 05:01:02.811891       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0907 05:01:04.698596       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:01:04.710908       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:01:04.834149       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:01:04.834349       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" with version 1786
I0907 05:01:04.834472       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: phase: Failed, bound to: "azuredisk-5356/pvc-vncjg (uid: c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2)", boundByController: true
I0907 05:01:04.834586       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is bound to claim azuredisk-5356/pvc-vncjg
I0907 05:01:04.834654       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: claim azuredisk-5356/pvc-vncjg not found
I0907 05:01:04.834672       1 pv_controller.go:1108] reclaimVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: policy is Delete
I0907 05:01:04.834717       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]]
I0907 05:01:04.834822       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2] started
I0907 05:01:04.842893       1 pv_controller.go:1341] isVolumeReleased[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is released
I0907 05:01:04.842916       1 pv_controller.go:1405] doDeleteVolume [pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]
I0907 05:01:04.843154       1 pv_controller.go:1260] deletion of volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) since it's in attaching or detaching state
I0907 05:01:04.843286       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: set phase Failed
I0907 05:01:04.843390       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: phase Failed already set
E0907 05:01:04.843536       1 goroutinemap.go:150] Operation for "delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]" failed. No retries permitted until 2022-09-07 05:01:05.843491326 +0000 UTC m=+525.373893717 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) since it's in attaching or detaching state"
I0907 05:01:05.574735       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="76.105µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53666" resp=200
I0907 05:01:05.635039       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:01:10.019683       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) returned with <nil>
I0907 05:01:10.019732       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2) succeeded
I0907 05:01:10.019777       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2 was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:01:10.019808       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:01:14.734888       1 gc_controller.go:161] GC'ing orphaned
I0907 05:01:14.734932       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 05:01:15.574708       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="128.21µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:38830" resp=200
I0907 05:01:19.711179       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:01:19.834799       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:01:19.834904       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" with version 1786
I0907 05:01:19.835004       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: phase: Failed, bound to: "azuredisk-5356/pvc-vncjg (uid: c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2)", boundByController: true
I0907 05:01:19.835145       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is bound to claim azuredisk-5356/pvc-vncjg
I0907 05:01:19.835174       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: claim azuredisk-5356/pvc-vncjg not found
I0907 05:01:19.835243       1 pv_controller.go:1108] reclaimVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: policy is Delete
I0907 05:01:19.835289       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]]
I0907 05:01:19.835382       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2] started
I0907 05:01:19.839481       1 pv_controller.go:1341] isVolumeReleased[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is released
... skipping 3 lines ...
I0907 05:01:25.118208       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2
I0907 05:01:25.118254       1 pv_controller.go:1436] volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" deleted
I0907 05:01:25.118481       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: success
I0907 05:01:25.126045       1 pv_protection_controller.go:205] Got event on PV pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2
I0907 05:01:25.126087       1 pv_protection_controller.go:125] Processing PV pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2
I0907 05:01:25.126374       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2" with version 1834
I0907 05:01:25.126497       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: phase: Failed, bound to: "azuredisk-5356/pvc-vncjg (uid: c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2)", boundByController: true
I0907 05:01:25.126674       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: volume is bound to claim azuredisk-5356/pvc-vncjg
I0907 05:01:25.126767       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: claim azuredisk-5356/pvc-vncjg not found
I0907 05:01:25.126803       1 pv_controller.go:1108] reclaimVolume[pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2]: policy is Delete
I0907 05:01:25.126894       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]]
I0907 05:01:25.127023       1 pv_controller.go:1764] operation "delete-pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2[6a31df77-6b7c-4c4b-b983-49e3b8a73c13]" is already running, skipping
I0907 05:01:25.132215       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-c22e64ed-76d2-4b5e-8cca-a4264c1bd7d2
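[editor's note] The lifecycle for pvc-c22e64ed completes in this block: once the attach/detach controller reports "DetachVolume.Detach succeeded", the next PV-controller resync can delete the managed disk and drop the protection finalizer. A small hypothetical sketch of that ordering check, with all identifiers invented for illustration: deletion only proceeds when the disk is neither attached nor mid attach/detach.

```go
package main

import (
	"errors"
	"fmt"
)

// diskState is a toy model of what the cloud provider reports for a managed disk.
type diskState struct {
	attachedTo   string // VM the disk is attached to, "" if none
	transitional bool   // true while an attach or detach call is still running
}

// reclaimDisk mirrors the ordering visible in the log: deletion is refused while
// the disk is attached or still attaching/detaching, and only succeeds after the
// detach has completed. Hypothetical helper, not controller code.
func reclaimDisk(s diskState, deleteDisk func() error) error {
	if s.transitional {
		return errors.New("disk is in attaching or detaching state, retry later")
	}
	if s.attachedTo != "" {
		return fmt.Errorf("disk still attached to node %q, detach first", s.attachedTo)
	}
	return deleteDisk()
}

func main() {
	del := func() error { fmt.Println("managed disk deleted"); return nil }

	// While detaching: the PV controller backs off.
	fmt.Println(reclaimDisk(diskState{attachedTo: "capz-node", transitional: true}, del))
	// After DetachVolume.Detach succeeded: deletion goes through.
	fmt.Println(reclaimDisk(diskState{}, del))
}
```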
... skipping 105 lines ...
I0907 05:01:31.580222       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2") from node "capz-m4haq0-md-0-7czdt" 
I0907 05:01:31.663979       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" to node "capz-m4haq0-md-0-7czdt".
I0907 05:01:31.861377       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" lun 0 to node "capz-m4haq0-md-0-7czdt".
I0907 05:01:31.861430       1 azure_controller_standard.go:93] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - attach disk(capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2) with DiskEncryptionSetID()
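[editor's note] In the attach path above, GetDiskLun first reports that the disk has no LUN on the target VM, so the attacher chooses a free LUN (0 here) and issues the attach with it. The sketch below only illustrates the "lowest free LUN" idea under an assumed fixed per-VM LUN limit; the actual limit depends on the VM size and the provider's own selection logic, and the names are illustrative.

```go
package main

import (
	"fmt"
	"sort"
)

// lowestFreeLUN returns the smallest LUN in [0, maxLUNs) that is not already in
// use on the VM, or -1 if every slot is taken.
func lowestFreeLUN(used []int32, maxLUNs int32) int32 {
	sort.Slice(used, func(i, j int) bool { return used[i] < used[j] })
	var next int32
	for _, l := range used {
		if l == next {
			next++
		} else if l > next {
			break
		}
	}
	if next >= maxLUNs {
		return -1
	}
	return next
}

func main() {
	// No data disks attached yet, as for the VM in the log: LUN 0 is chosen.
	fmt.Println(lowestFreeLUN(nil, 64)) // 0
	// LUNs 0 and 1 taken: the next attach would use LUN 2.
	fmt.Println(lowestFreeLUN([]int32{1, 0}, 64)) // 2
}
```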
I0907 05:01:32.779592       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0907 05:01:32.810619       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-6827z, uid c11ac354-9fc1-4928-b5e9-427156af0296, event type delete
E0907 05:01:32.838957       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-4gw4k" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0907 05:01:32.843467       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-x5t6r.17127b581ca1b3c6, uid f6c3324f-6c38-4512-b1be-a1f17c0e3778, event type delete
I0907 05:01:32.853541       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-x5t6r.17127b5a9480b540, uid 6fd122d0-73fb-4d3d-818d-8c7c9bf66ad7, event type delete
I0907 05:01:32.866189       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-x5t6r.17127b5c7a1dad94, uid 0cfd192b-739a-444c-8f06-12611eb04669, event type delete
I0907 05:01:32.870159       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-x5t6r.17127b5c7e2d6693, uid 8cff110e-ba12-40b3-9d8f-017d6591a74e, event type delete
I0907 05:01:32.873997       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name azuredisk-volume-tester-x5t6r.17127b5c85e670af, uid 94594c30-e277-4e46-8d17-3e3c372e710e, event type delete
I0907 05:01:32.879758       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5356, name pvc-vncjg.17127b57673a8bc3, uid 0866a8cb-336c-4f62-9cfd-50f664f27865, event type delete
... skipping 861 lines ...
I0907 05:03:21.629394       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2\"}]}}" for node "capz-m4haq0-md-0-7czdt" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 0}]
I0907 05:03:21.629859       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc") on node "capz-m4haq0-md-0-7czdt" 
I0907 05:03:21.630313       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 05:03:21.630517       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2 to the node "capz-m4haq0-md-0-7czdt" mounted true
I0907 05:03:21.630718       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:03:21.632589       1 operation_generator.go:1558] Verified volume is safe to detach for volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc") on node "capz-m4haq0-md-0-7czdt" 
I0907 05:03:21.635891       1 pv_controller.go:1260] deletion of volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:03:21.635919       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: set phase Failed
I0907 05:03:21.635930       1 pv_controller.go:858] updating PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: set phase Failed
I0907 05:03:21.640388       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" with version 2106
I0907 05:03:21.640675       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: phase: Failed, bound to: "azuredisk-5194/pvc-tjdsd (uid: 26a5da5b-8e93-4a4b-bba1-22236a62d2cc)", boundByController: true
I0907 05:03:21.640959       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: volume is bound to claim azuredisk-5194/pvc-tjdsd
I0907 05:03:21.641232       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: claim azuredisk-5194/pvc-tjdsd not found
I0907 05:03:21.641442       1 pv_controller.go:1108] reclaimVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: policy is Delete
I0907 05:03:21.641707       1 pv_controller.go:1753] scheduleOperation[delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]]
I0907 05:03:21.641890       1 pv_controller.go:1764] operation "delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]" is already running, skipping
I0907 05:03:21.642338       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" with version 2106
I0907 05:03:21.642616       1 pv_controller.go:879] volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" entered phase "Failed"
I0907 05:03:21.642828       1 pv_controller.go:901] volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
E0907 05:03:21.643279       1 goroutinemap.go:150] Operation for "delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]" failed. No retries permitted until 2022-09-07 05:03:22.143245292 +0000 UTC m=+661.673647683 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:03:21.643503       1 pv_protection_controller.go:205] Got event on PV pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc
I0907 05:03:21.643631       1 event.go:291] "Event occurred" object="pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
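[editor's note] This block shows the other failure mode for the same delete path: the disk is still attached to the node, so deletion is refused outright, the PV is marked Failed, and a VolumeFailedDelete warning event is recorded before the detach has even been issued. Both messages seen in this log are transient and simply retried after backoff. A purely illustrative classifier sketch, matching the two message strings that appear above (real code would inspect typed cloud-provider errors rather than strings):

```go
package main

import (
	"fmt"
	"strings"
)

// retryableDeleteError reports whether a disk-deletion error seen in this log is
// transient, i.e. the PV controller should retry after backoff rather than give up.
func retryableDeleteError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "already attached to node") ||
		strings.Contains(msg, "attaching or detaching state")
}

func main() {
	fmt.Println(retryableDeleteError(fmt.Errorf("disk(...) already attached to node(...), could not be deleted")))        // true
	fmt.Println(retryableDeleteError(fmt.Errorf("failed to delete disk(...) since it's in attaching or detaching state"))) // true
	fmt.Println(retryableDeleteError(nil))                                                                                 // false
}
```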
I0907 05:03:21.656106       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc from node "capz-m4haq0-md-0-7czdt"
I0907 05:03:21.656146       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc"
I0907 05:03:21.656162       1 azure_controller_standard.go:166] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc)
I0907 05:03:25.126186       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-7czdt ReadyCondition updated. Updating timestamp.
... skipping 3 lines ...
I0907 05:03:34.703979       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:03:34.718403       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:03:34.738742       1 gc_controller.go:161] GC'ing orphaned
I0907 05:03:34.738773       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 05:03:34.841287       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:03:34.841388       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" with version 2106
I0907 05:03:34.841478       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: phase: Failed, bound to: "azuredisk-5194/pvc-tjdsd (uid: 26a5da5b-8e93-4a4b-bba1-22236a62d2cc)", boundByController: true
I0907 05:03:34.841556       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: volume is bound to claim azuredisk-5194/pvc-tjdsd
I0907 05:03:34.841661       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: claim azuredisk-5194/pvc-tjdsd not found
I0907 05:03:34.841738       1 pv_controller.go:1108] reclaimVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: policy is Delete
I0907 05:03:34.841688       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-6g8f7" with version 1862
I0907 05:03:34.841953       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
I0907 05:03:34.842101       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" found: phase: Bound, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
... skipping 41 lines ...
I0907 05:03:34.843796       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq found: phase: Bound, bound to: "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e", bindCompleted: true, boundByController: true
I0907 05:03:34.843816       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: all is bound
I0907 05:03:34.843823       1 pv_controller.go:858] updating PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Bound
I0907 05:03:34.843830       1 pv_controller.go:861] updating PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase Bound already set
I0907 05:03:34.850091       1 pv_controller.go:1341] isVolumeReleased[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: volume is released
I0907 05:03:34.850113       1 pv_controller.go:1405] doDeleteVolume [pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]
I0907 05:03:34.850149       1 pv_controller.go:1260] deletion of volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) since it's in attaching or detaching state
I0907 05:03:34.850162       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: set phase Failed
I0907 05:03:34.850179       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: phase Failed already set
E0907 05:03:34.850219       1 goroutinemap.go:150] Operation for "delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]" failed. No retries permitted until 2022-09-07 05:03:35.85018822 +0000 UTC m=+675.380590611 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) since it's in attaching or detaching state"
I0907 05:03:35.573696       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="121.508µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:53932" resp=200
I0907 05:03:35.764758       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:03:42.297252       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) returned with <nil>
I0907 05:03:42.297307       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc) succeeded
I0907 05:03:42.297319       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc was detached from node:capz-m4haq0-md-0-7czdt
I0907 05:03:42.297346       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc") on node "capz-m4haq0-md-0-7czdt" 
... skipping 38 lines ...
I0907 05:03:49.843546       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:03:49.843574       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq found: phase: Bound, bound to: "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e", bindCompleted: true, boundByController: true
I0907 05:03:49.843599       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: all is bound
I0907 05:03:49.843614       1 pv_controller.go:858] updating PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Bound
I0907 05:03:49.843628       1 pv_controller.go:861] updating PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase Bound already set
I0907 05:03:49.843648       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" with version 2106
I0907 05:03:49.843677       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: phase: Failed, bound to: "azuredisk-5194/pvc-tjdsd (uid: 26a5da5b-8e93-4a4b-bba1-22236a62d2cc)", boundByController: true
I0907 05:03:49.843708       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: volume is bound to claim azuredisk-5194/pvc-tjdsd
I0907 05:03:49.843745       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: claim azuredisk-5194/pvc-tjdsd not found
I0907 05:03:49.843759       1 pv_controller.go:1108] reclaimVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: policy is Delete
I0907 05:03:49.843781       1 pv_controller.go:1753] scheduleOperation[delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]]
I0907 05:03:49.843831       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" with version 1860
I0907 05:03:49.843904       1 pv_controller.go:1232] deleteVolumeOperation [pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc] started
... skipping 9 lines ...
I0907 05:03:50.428121       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc
I0907 05:03:50.428160       1 pv_controller.go:1436] volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" deleted
I0907 05:03:50.428175       1 pv_controller.go:1284] deleteVolumeOperation [pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: success
I0907 05:03:50.435720       1 pv_protection_controller.go:205] Got event on PV pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc
I0907 05:03:50.435756       1 pv_protection_controller.go:125] Processing PV pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc
I0907 05:03:50.436401       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" with version 2147
I0907 05:03:50.436590       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: phase: Failed, bound to: "azuredisk-5194/pvc-tjdsd (uid: 26a5da5b-8e93-4a4b-bba1-22236a62d2cc)", boundByController: true
I0907 05:03:50.436744       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: volume is bound to claim azuredisk-5194/pvc-tjdsd
I0907 05:03:50.436858       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: claim azuredisk-5194/pvc-tjdsd not found
I0907 05:03:50.436872       1 pv_controller.go:1108] reclaimVolume[pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc]: policy is Delete
I0907 05:03:50.436891       1 pv_controller.go:1753] scheduleOperation[delete-pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc[73f3af90-38aa-4e7d-a05c-aa6d2d1b8980]]
I0907 05:03:50.437025       1 pv_controller.go:1232] deleteVolumeOperation [pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc] started
I0907 05:03:50.446562       1 pv_controller.go:1244] Volume "pvc-26a5da5b-8e93-4a4b-bba1-22236a62d2cc" is already being deleted
... skipping 196 lines ...
I0907 05:04:26.361822       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
I0907 05:04:26.361946       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]]
I0907 05:04:26.361989       1 pv_controller.go:1764] operation "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" is already running, skipping
I0907 05:04:26.361770       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e] started
I0907 05:04:26.364241       1 pv_controller.go:1341] isVolumeReleased[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is released
I0907 05:04:26.364268       1 pv_controller.go:1405] doDeleteVolume [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]
I0907 05:04:26.392793       1 pv_controller.go:1260] deletion of volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:04:26.392850       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Failed
I0907 05:04:26.392862       1 pv_controller.go:858] updating PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Failed
I0907 05:04:26.396156       1 pv_protection_controller.go:205] Got event on PV pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e
I0907 05:04:26.396404       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2213
I0907 05:04:26.396681       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase: Failed, bound to: "azuredisk-5194/pvc-24zwq (uid: 4a1fade5-64cb-480e-98b0-fa4435bf340e)", boundByController: true
I0907 05:04:26.396890       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:04:26.397078       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq not found
I0907 05:04:26.397229       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
I0907 05:04:26.397410       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]]
I0907 05:04:26.397564       1 pv_controller.go:1764] operation "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" is already running, skipping
I0907 05:04:26.397902       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2213
I0907 05:04:26.397930       1 pv_controller.go:879] volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" entered phase "Failed"
I0907 05:04:26.397940       1 pv_controller.go:901] volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
E0907 05:04:26.398139       1 goroutinemap.go:150] Operation for "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" failed. No retries permitted until 2022-09-07 05:04:26.898116107 +0000 UTC m=+726.428518498 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:04:26.398513       1 event.go:291] "Event occurred" object="pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:04:29.319081       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 12 items received
I0907 05:04:29.786765       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 05:04:34.706132       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:04:34.720476       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:04:34.740868       1 gc_controller.go:161] GC'ing orphaned
... skipping 6 lines ...
I0907 05:04:34.844965       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is bound to claim azuredisk-5194/pvc-6g8f7
I0907 05:04:34.844997       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 found: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
I0907 05:04:34.845013       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: all is bound
I0907 05:04:34.845105       1 pv_controller.go:858] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: set phase Bound
I0907 05:04:34.845122       1 pv_controller.go:861] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase Bound already set
I0907 05:04:34.845143       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2213
I0907 05:04:34.845320       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase: Failed, bound to: "azuredisk-5194/pvc-24zwq (uid: 4a1fade5-64cb-480e-98b0-fa4435bf340e)", boundByController: true
I0907 05:04:34.845360       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:04:34.845491       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq not found
I0907 05:04:34.845510       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
I0907 05:04:34.845532       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]]
I0907 05:04:34.845677       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e] started
I0907 05:04:34.845956       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-6g8f7" with version 1862
... skipping 16 lines ...
I0907 05:04:34.852935       1 pv_controller.go:1405] doDeleteVolume [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]
I0907 05:04:34.867414       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m4haq0-md-0-4c66f" succeeded. VolumesAttached: []
I0907 05:04:34.867827       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:04:34.869268       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 05:04:34.869575       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e to the node "capz-m4haq0-md-0-4c66f" mounted false
I0907 05:04:34.872967       1 operation_generator.go:1558] Verified volume is safe to detach for volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:04:34.885960       1 pv_controller.go:1260] deletion of volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:04:34.886260       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Failed
I0907 05:04:34.886417       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase Failed already set
E0907 05:04:34.886612       1 goroutinemap.go:150] Operation for "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" failed. No retries permitted until 2022-09-07 05:04:35.886575136 +0000 UTC m=+735.416977427 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:04:34.894012       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e from node "capz-m4haq0-md-0-4c66f"
I0907 05:04:34.964080       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e"
I0907 05:04:34.964126       1 azure_controller_standard.go:166] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e)
I0907 05:04:35.139614       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 05:04:35.574158       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="81.406µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:40386" resp=200
I0907 05:04:35.810379       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
... skipping 8 lines ...
I0907 05:04:49.845665       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: all is bound
I0907 05:04:49.845671       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
I0907 05:04:49.845673       1 pv_controller.go:858] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: set phase Bound
I0907 05:04:49.845687       1 pv_controller.go:861] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase Bound already set
I0907 05:04:49.845702       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2213
I0907 05:04:49.845707       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" found: phase: Bound, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
I0907 05:04:49.845722       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase: Failed, bound to: "azuredisk-5194/pvc-24zwq (uid: 4a1fade5-64cb-480e-98b0-fa4435bf340e)", boundByController: true
I0907 05:04:49.845734       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: claim is already correctly bound
I0907 05:04:49.845745       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:04:49.845745       1 pv_controller.go:1012] binding volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" to claim "azuredisk-5194/pvc-6g8f7"
I0907 05:04:49.845758       1 pv_controller.go:910] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: binding to "azuredisk-5194/pvc-6g8f7"
I0907 05:04:49.845767       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq not found
I0907 05:04:49.845775       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
... skipping 8 lines ...
I0907 05:04:49.845859       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7] status: phase Bound already set
I0907 05:04:49.845881       1 pv_controller.go:1038] volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" bound to claim "azuredisk-5194/pvc-6g8f7"
I0907 05:04:49.845899       1 pv_controller.go:1039] volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
I0907 05:04:49.845926       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-6g8f7" status after binding: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
I0907 05:04:49.864515       1 pv_controller.go:1341] isVolumeReleased[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is released
I0907 05:04:49.864540       1 pv_controller.go:1405] doDeleteVolume [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]
I0907 05:04:49.864579       1 pv_controller.go:1260] deletion of volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) since it's in attaching or detaching state
I0907 05:04:49.864593       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: set phase Failed
I0907 05:04:49.864603       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase Failed already set
E0907 05:04:49.864641       1 goroutinemap.go:150] Operation for "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" failed. No retries permitted until 2022-09-07 05:04:51.864613504 +0000 UTC m=+751.395015895 (durationBeforeRetry 2s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) since it's in attaching or detaching state"
I0907 05:04:50.440373       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) returned with <nil>
I0907 05:04:50.440420       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e) succeeded
I0907 05:04:50.440431       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:04:50.440697       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:04:53.694504       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 19 items received
I0907 05:04:54.741311       1 gc_controller.go:161] GC'ing orphaned
... skipping 11 lines ...
I0907 05:05:04.846939       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is bound to claim azuredisk-5194/pvc-6g8f7
I0907 05:05:04.847037       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 found: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
I0907 05:05:04.847203       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: all is bound
I0907 05:05:04.847345       1 pv_controller.go:858] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: set phase Bound
I0907 05:05:04.847452       1 pv_controller.go:861] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase Bound already set
I0907 05:05:04.847519       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2213
I0907 05:05:04.847602       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase: Failed, bound to: "azuredisk-5194/pvc-24zwq (uid: 4a1fade5-64cb-480e-98b0-fa4435bf340e)", boundByController: true
I0907 05:05:04.847685       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:05:04.847750       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq not found
I0907 05:05:04.847793       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
I0907 05:05:04.847849       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]]
I0907 05:05:04.847907       1 pv_controller.go:1232] deleteVolumeOperation [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e] started
I0907 05:05:04.846843       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-6g8f7]: phase: Bound, bound to: "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2", bindCompleted: true, boundByController: true
... skipping 20 lines ...
I0907 05:05:10.096571       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e
I0907 05:05:10.096610       1 pv_controller.go:1436] volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" deleted
I0907 05:05:10.096624       1 pv_controller.go:1284] deleteVolumeOperation [pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: success
I0907 05:05:10.107559       1 pv_protection_controller.go:205] Got event on PV pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e
I0907 05:05:10.107597       1 pv_protection_controller.go:125] Processing PV pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e
I0907 05:05:10.107917       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e" with version 2278
I0907 05:05:10.107951       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: phase: Failed, bound to: "azuredisk-5194/pvc-24zwq (uid: 4a1fade5-64cb-480e-98b0-fa4435bf340e)", boundByController: true
I0907 05:05:10.107979       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: volume is bound to claim azuredisk-5194/pvc-24zwq
I0907 05:05:10.107999       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: claim azuredisk-5194/pvc-24zwq not found
I0907 05:05:10.108008       1 pv_controller.go:1108] reclaimVolume[pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e]: policy is Delete
I0907 05:05:10.108024       1 pv_controller.go:1753] scheduleOperation[delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]]
I0907 05:05:10.108032       1 pv_controller.go:1764] operation "delete-pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e[a3c44085-dd13-407e-80f3-c7a2d800a916]" is already running, skipping
I0907 05:05:10.115909       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-4a1fade5-64cb-480e-98b0-fa4435bf340e
... skipping 183 lines ...
I0907 05:05:51.572221       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 not found
I0907 05:05:51.572260       1 pv_controller.go:1108] reclaimVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: policy is Delete
I0907 05:05:51.572282       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]]
I0907 05:05:51.572291       1 pv_controller.go:1764] operation "delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]" is already running, skipping
I0907 05:05:51.574014       1 pv_controller.go:1341] isVolumeReleased[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is released
I0907 05:05:51.574036       1 pv_controller.go:1405] doDeleteVolume [pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]
I0907 05:05:51.574095       1 pv_controller.go:1260] deletion of volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2) since it's in attaching or detaching state
I0907 05:05:51.574132       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: set phase Failed
I0907 05:05:51.574173       1 pv_controller.go:858] updating PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: set phase Failed
I0907 05:05:51.577538       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" with version 2351
I0907 05:05:51.577764       1 pv_controller.go:879] volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" entered phase "Failed"
I0907 05:05:51.578472       1 pv_controller.go:901] volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2) since it's in attaching or detaching state
I0907 05:05:51.578412       1 pv_protection_controller.go:205] Got event on PV pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2
I0907 05:05:51.578428       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" with version 2351
I0907 05:05:51.578707       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase: Failed, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
I0907 05:05:51.578797       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is bound to claim azuredisk-5194/pvc-6g8f7
I0907 05:05:51.578902       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 not found
I0907 05:05:51.578982       1 pv_controller.go:1108] reclaimVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: policy is Delete
I0907 05:05:51.579061       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]]
E0907 05:05:51.578712       1 goroutinemap.go:150] Operation for "delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]" failed. No retries permitted until 2022-09-07 05:05:52.078504456 +0000 UTC m=+811.608906747 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2) since it's in attaching or detaching state"
I0907 05:05:51.579222       1 pv_controller.go:1766] operation "delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]" postponed due to exponential backoff
I0907 05:05:51.579321       1 event.go:291] "Event occurred" object="pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2) since it's in attaching or detaching state"
I0907 05:05:51.687640       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0907 05:05:52.684848       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 88 items received
I0907 05:05:54.743468       1 gc_controller.go:161] GC'ing orphaned
I0907 05:05:54.743513       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 05:05:54.755613       1 controller.go:272] Triggering nodeSync
I0907 05:05:54.755648       1 controller.go:291] nodeSync has been triggered
... skipping 10 lines ...
I0907 05:05:58.697209       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 62 items received
I0907 05:05:58.728548       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 0 items received
I0907 05:06:04.707444       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:06:04.723168       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:06:04.849120       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:06:04.849366       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" with version 2351
I0907 05:06:04.849473       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase: Failed, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
I0907 05:06:04.849581       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is bound to claim azuredisk-5194/pvc-6g8f7
I0907 05:06:04.849714       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 not found
I0907 05:06:04.849772       1 pv_controller.go:1108] reclaimVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: policy is Delete
I0907 05:06:04.849813       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]]
I0907 05:06:04.849865       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2] started
I0907 05:06:04.861703       1 pv_controller.go:1341] isVolumeReleased[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is released
... skipping 3 lines ...
I0907 05:06:10.108658       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2
I0907 05:06:10.108698       1 pv_controller.go:1436] volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" deleted
I0907 05:06:10.108713       1 pv_controller.go:1284] deleteVolumeOperation [pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: success
I0907 05:06:10.120760       1 pv_protection_controller.go:205] Got event on PV pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2
I0907 05:06:10.121450       1 pv_protection_controller.go:125] Processing PV pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2
I0907 05:06:10.121862       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" with version 2381
I0907 05:06:10.121930       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: phase: Failed, bound to: "azuredisk-5194/pvc-6g8f7 (uid: 7b368383-a34d-4e4a-9642-d950b76e9bf2)", boundByController: true
I0907 05:06:10.121968       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: volume is bound to claim azuredisk-5194/pvc-6g8f7
I0907 05:06:10.122016       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: claim azuredisk-5194/pvc-6g8f7 not found
I0907 05:06:10.122026       1 pv_controller.go:1108] reclaimVolume[pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2]: policy is Delete
I0907 05:06:10.122043       1 pv_controller.go:1753] scheduleOperation[delete-pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2[227345c5-1228-4db0-9639-9c8514287703]]
I0907 05:06:10.122093       1 pv_controller.go:1232] deleteVolumeOperation [pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2] started
I0907 05:06:10.126307       1 pv_controller.go:1244] Volume "pvc-7b368383-a34d-4e4a-9642-d950b76e9bf2" is already being deleted
... skipping 35 lines ...
I0907 05:06:11.746908       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-lnhvj-5545bd46b9-nw8ff"
I0907 05:06:11.746979       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj"
I0907 05:06:11.763231       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9" (28.70977ms)
I0907 05:06:11.763269       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9", timestamp:time.Time{wall:0xc0be2790ebc8863a, ext:831264964141, loc:(*time.Location)(0x731ea80)}}
I0907 05:06:11.763471       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0907 05:06:11.763919       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj" duration="35.152633ms"
I0907 05:06:11.764094       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-lnhvj\": the object has been modified; please apply your changes to the latest version and try again"
I0907 05:06:11.764234       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj" startTime="2022-09-07 05:06:11.764210186 +0000 UTC m=+831.294612477"
I0907 05:06:11.764742       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-lnhvj" timed out (false) [last progress check: 2022-09-07 05:06:11 +0000 UTC - now: 2022-09-07 05:06:11.764733424 +0000 UTC m=+831.295135815]
I0907 05:06:11.765432       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-4rm84"
I0907 05:06:11.765700       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-4rm84" with version 2398
I0907 05:06:11.765994       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-4rm84]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0907 05:06:11.766145       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-4rm84]: no volume found
... skipping 106 lines ...
I0907 05:06:14.863394       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" to node "capz-m4haq0-md-0-7czdt".
I0907 05:06:14.898003       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" lun 0 to node "capz-m4haq0-md-0-7czdt".
I0907 05:06:14.898057       1 azure_controller_standard.go:93] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - attach disk(capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) with DiskEncryptionSetID()
I0907 05:06:15.574437       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="68.105µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:48456" resp=200
I0907 05:06:16.188487       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5194
I0907 05:06:16.246355       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-nbhwb, uid 8d0f70fa-72f4-4171-96d2-2d02a88eab4d, event type delete
E0907 05:06:16.321492       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-jnpdj" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0907 05:06:16.358498       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b663e54cc97, uid b75b408c-2a8d-4316-9288-6af108edd606, event type delete
I0907 05:06:16.378608       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b68bf57f4ef, uid 70d9d0a0-e5c0-40ad-8397-64c6d0f05b1f, event type delete
I0907 05:06:16.386862       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b6aae1fb611, uid 2aef4f2a-0cd9-4326-92c3-1bd5c3a53a6f, event type delete
I0907 05:06:16.421262       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b6ab21d9e23, uid 35b864e3-e01f-4205-a8f6-ddfe9086a7e0, event type delete
I0907 05:06:16.425499       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b6ab9979a41, uid fa140cd2-2e4e-4ec5-bf8a-e75b6e29784e, event type delete
I0907 05:06:16.480227       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5194, name azuredisk-volume-tester-2kwhv.17127b99358cdca4, uid 146deb49-5c5f-4a0e-a8e7-3bc31104c516, event type delete
... skipping 171 lines ...
I0907 05:06:36.608756       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz, PodDisruptionBudget controller will avoid syncing.
I0907 05:06:36.608918       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz"
I0907 05:06:36.612742       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9" (23.829302ms)
I0907 05:06:36.613170       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9", timestamp:time.Time{wall:0xc0be279721346fe3, ext:856087487034, loc:(*time.Location)(0x731ea80)}}
I0907 05:06:36.613410       1 controller_utils.go:972] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9-nw8ff in state Running, deletion time 2022-09-07 05:07:06 +0000 UTC
I0907 05:06:36.613612       1 replica_set.go:649] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9" (451.432µs)
W0907 05:06:36.621645       1 reconciler.go:385] Multi-Attach error for volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2") from node "capz-m4haq0-md-0-4c66f" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9-nw8ff on node capz-m4haq0-md-0-7czdt
I0907 05:06:36.621879       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2\" Volume is already used by pod(s) azuredisk-volume-tester-lnhvj-5545bd46b9-nw8ff"
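The Multi-Attach warning above is expected for an Azure managed disk: it is a ReadWriteOnce volume, so the attach/detach controller refuses to attach it to capz-m4haq0-md-0-4c66f while the replaced pod on capz-m4haq0-md-0-7czdt still holds it. A minimal, hypothetical Go sketch of that single-attacher check follows; the map and function names are illustrative only, not the controller's actual code.

```go
package main

import "fmt"

// attachedNode records which node currently has each single-attach volume attached.
var attachedNode = map[string]string{
	"pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2": "capz-m4haq0-md-0-7czdt",
}

// canAttach reports whether a ReadWriteOnce volume may be attached to node,
// returning an error comparable to the Multi-Attach warning in the log above.
func canAttach(volume, node string) error {
	if owner, ok := attachedNode[volume]; ok && owner != node {
		return fmt.Errorf("Multi-Attach error for volume %q: already used on node %s", volume, owner)
	}
	return nil
}

func main() {
	if err := canAttach("pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2", "capz-m4haq0-md-0-4c66f"); err != nil {
		fmt.Println(err)
	}
}
```

The attach only proceeds once the old node's detach (seen later in the log) completes and the volume drops out of the attached set.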
I0907 05:06:39.842394       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 05:06:44.986555       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 05:06:45.168044       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 05:06:45.574597       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="81.906µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:49974" resp=200
I0907 05:06:49.358417       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 05:06:49.724055       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 533 lines ...
I0907 05:09:34.062954       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: claim azuredisk-1353/pvc-4rm84 not found
I0907 05:09:34.062968       1 pv_controller.go:1108] reclaimVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: policy is Delete
I0907 05:09:34.062982       1 pv_controller.go:1753] scheduleOperation[delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]]
I0907 05:09:34.063013       1 pv_controller.go:1764] operation "delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]" is already running, skipping
I0907 05:09:34.064804       1 pv_controller.go:1341] isVolumeReleased[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is released
I0907 05:09:34.064822       1 pv_controller.go:1405] doDeleteVolume [pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]
I0907 05:09:34.089735       1 pv_controller.go:1260] deletion of volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:09:34.089763       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: set phase Failed
I0907 05:09:34.089776       1 pv_controller.go:858] updating PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: set phase Failed
I0907 05:09:34.093832       1 pv_protection_controller.go:205] Got event on PV pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2
I0907 05:09:34.094423       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" with version 2786
I0907 05:09:34.094528       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: phase: Failed, bound to: "azuredisk-1353/pvc-4rm84 (uid: 730d8537-eefe-4dbe-b321-ef8c3b8bcac2)", boundByController: true
I0907 05:09:34.094602       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is bound to claim azuredisk-1353/pvc-4rm84
I0907 05:09:34.094628       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: claim azuredisk-1353/pvc-4rm84 not found
I0907 05:09:34.094692       1 pv_controller.go:1108] reclaimVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: policy is Delete
I0907 05:09:34.094726       1 pv_controller.go:1753] scheduleOperation[delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]]
I0907 05:09:34.094777       1 pv_controller.go:1764] operation "delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]" is already running, skipping
I0907 05:09:34.095429       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" with version 2786
I0907 05:09:34.095581       1 pv_controller.go:879] volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" entered phase "Failed"
I0907 05:09:34.095600       1 pv_controller.go:901] volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:09:34.096052       1 event.go:291] "Event occurred" object="pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
E0907 05:09:34.096237       1 goroutinemap.go:150] Operation for "delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]" failed. No retries permitted until 2022-09-07 05:09:34.596202219 +0000 UTC m=+1034.126604610 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:09:34.714284       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:09:34.729469       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:09:34.750859       1 gc_controller.go:161] GC'ing orphaned
I0907 05:09:34.750898       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 05:09:34.859910       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:09:34.859997       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" with version 2786
I0907 05:09:34.860064       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: phase: Failed, bound to: "azuredisk-1353/pvc-4rm84 (uid: 730d8537-eefe-4dbe-b321-ef8c3b8bcac2)", boundByController: true
I0907 05:09:34.860106       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is bound to claim azuredisk-1353/pvc-4rm84
I0907 05:09:34.860134       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: claim azuredisk-1353/pvc-4rm84 not found
I0907 05:09:34.860148       1 pv_controller.go:1108] reclaimVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: policy is Delete
I0907 05:09:34.860166       1 pv_controller.go:1753] scheduleOperation[delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]]
I0907 05:09:34.860216       1 pv_controller.go:1232] deleteVolumeOperation [pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2] started
I0907 05:09:34.865907       1 pv_controller.go:1341] isVolumeReleased[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is released
I0907 05:09:34.865947       1 pv_controller.go:1405] doDeleteVolume [pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]
I0907 05:09:34.890656       1 pv_controller.go:1260] deletion of volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:09:34.890856       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: set phase Failed
I0907 05:09:34.890973       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: phase Failed already set
E0907 05:09:34.891106       1 goroutinemap.go:150] Operation for "delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]" failed. No retries permitted until 2022-09-07 05:09:35.891073528 +0000 UTC m=+1035.421475919 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:09:35.154303       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
I0907 05:09:35.154345       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 to the node "capz-m4haq0-md-0-4c66f" mounted false
I0907 05:09:35.198779       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 05:09:35.199444       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-m4haq0-md-0-4c66f" succeeded. VolumesAttached: []
I0907 05:09:35.199573       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:09:35.199931       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-4c66f"
... skipping 15 lines ...
I0907 05:09:45.698908       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2) succeeded
I0907 05:09:45.698921       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2 was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:09:45.698949       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:09:49.729920       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:09:49.860923       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:09:49.861011       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" with version 2786
I0907 05:09:49.861043       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: phase: Failed, bound to: "azuredisk-1353/pvc-4rm84 (uid: 730d8537-eefe-4dbe-b321-ef8c3b8bcac2)", boundByController: true
I0907 05:09:49.861068       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is bound to claim azuredisk-1353/pvc-4rm84
I0907 05:09:49.861082       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: claim azuredisk-1353/pvc-4rm84 not found
I0907 05:09:49.861088       1 pv_controller.go:1108] reclaimVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: policy is Delete
I0907 05:09:49.861102       1 pv_controller.go:1753] scheduleOperation[delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]]
I0907 05:09:49.861135       1 pv_controller.go:1232] deleteVolumeOperation [pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2] started
I0907 05:09:49.874841       1 pv_controller.go:1341] isVolumeReleased[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is released
... skipping 3 lines ...
I0907 05:09:55.094455       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2
I0907 05:09:55.094494       1 pv_controller.go:1436] volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" deleted
I0907 05:09:55.094508       1 pv_controller.go:1284] deleteVolumeOperation [pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: success
I0907 05:09:55.101971       1 pv_protection_controller.go:205] Got event on PV pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2
I0907 05:09:55.102281       1 pv_protection_controller.go:125] Processing PV pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2
I0907 05:09:55.102765       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2" with version 2821
I0907 05:09:55.102806       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: phase: Failed, bound to: "azuredisk-1353/pvc-4rm84 (uid: 730d8537-eefe-4dbe-b321-ef8c3b8bcac2)", boundByController: true
I0907 05:09:55.102835       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: volume is bound to claim azuredisk-1353/pvc-4rm84
I0907 05:09:55.102983       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: claim azuredisk-1353/pvc-4rm84 not found
I0907 05:09:55.103002       1 pv_controller.go:1108] reclaimVolume[pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2]: policy is Delete
I0907 05:09:55.103019       1 pv_controller.go:1753] scheduleOperation[delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]]
I0907 05:09:55.103028       1 pv_controller.go:1764] operation "delete-pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2[9acc7a67-688f-4755-a0c0-0d356dbb97b9]" is already running, skipping
I0907 05:09:55.108158       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-730d8537-eefe-4dbe-b321-ef8c3b8bcac2
... skipping 145 lines ...
I0907 05:10:02.742078       1 pv_controller.go:1764] operation "delete-pvc-0fe6352a-ef8f-4ac3-a8b9-620e3b3c3033[e924cba2-036d-4baa-a535-dd3b304a2be5]" is already running, skipping
I0907 05:10:02.742136       1 pv_controller.go:1232] deleteVolumeOperation [pvc-0fe6352a-ef8f-4ac3-a8b9-620e3b3c3033] started
I0907 05:10:02.743951       1 pv_controller.go:1341] isVolumeReleased[pvc-0fe6352a-ef8f-4ac3-a8b9-620e3b3c3033]: volume is released
I0907 05:10:02.743971       1 pv_controller.go:1405] doDeleteVolume [pvc-0fe6352a-ef8f-4ac3-a8b9-620e3b3c3033]
I0907 05:10:03.039209       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1353
I0907 05:10:03.082598       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-8q6bp, uid be0fd4db-dd3b-4f9d-b779-5406c66fa46e, event type delete
E0907 05:10:03.099665       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-hslzd" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0907 05:10:03.140906       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bad45c675f1, uid c142d467-24b7-425d-bb4e-b1444995b939, event type delete
I0907 05:10:03.150483       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bad480e507a, uid 6d4cded8-017c-47d5-9111-399e242f9dcf, event type delete
I0907 05:10:03.155483       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bc049f379ef, uid 8061b9d6-0a87-46dc-8e44-79c72f0985e9, event type delete
I0907 05:10:03.158849       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bc9ea5bbae0, uid 45b62202-c479-4297-8cf3-4ae3c1a5796e, event type delete
I0907 05:10:03.162507       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bcd5c15f228, uid 5ba410db-1d7c-4947-ba57-b03380d850ca, event type delete
I0907 05:10:03.170620       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-lnhvj-5545bd46b9-7x9zz.17127bcd5ed30a73, uid 2ef8f664-58da-4978-aace-9a3d93269be8, event type delete
... skipping 904 lines ...
I0907 05:11:26.403586       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:11:26.403599       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:11:26.403608       1 pv_controller.go:1764] operation "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" is already running, skipping
I0907 05:11:26.403445       1 pv_controller.go:1232] deleteVolumeOperation [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83] started
I0907 05:11:26.407560       1 pv_controller.go:1341] isVolumeReleased[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is released
I0907 05:11:26.407586       1 pv_controller.go:1405] doDeleteVolume [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]
I0907 05:11:26.434265       1 pv_controller.go:1260] deletion of volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:11:26.434304       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:11:26.434315       1 pv_controller.go:858] updating PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:11:26.439026       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:11:26.439064       1 pv_controller.go:879] volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" entered phase "Failed"
I0907 05:11:26.439350       1 pv_protection_controller.go:205] Got event on PV pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83
I0907 05:11:26.439403       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:11:26.439433       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:11:26.439539       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:11:26.439606       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:11:26.439653       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:11:26.439729       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:11:26.439821       1 pv_controller.go:1764] operation "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" is already running, skipping
I0907 05:11:26.439852       1 pv_controller.go:901] volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
E0907 05:11:26.439948       1 goroutinemap.go:150] Operation for "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" failed. No retries permitted until 2022-09-07 05:11:26.939888914 +0000 UTC m=+1146.470291305 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:11:26.440062       1 event.go:291] "Event occurred" object="pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:11:32.077903       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 05:11:32.077941       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:11:32.077953       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:11:32.077962       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-320da68d-deeb-41d5-a865-ee997ef92a09 to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:11:32.091994       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
... skipping 40 lines ...
I0907 05:11:34.865322       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: volume is bound to claim azuredisk-59/pvc-wxv4l
I0907 05:11:34.865346       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: claim azuredisk-59/pvc-wxv4l found: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:11:34.865395       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: all is bound
I0907 05:11:34.865412       1 pv_controller.go:858] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: set phase Bound
I0907 05:11:34.865421       1 pv_controller.go:861] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: phase Bound already set
I0907 05:11:34.865475       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:11:34.865506       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:11:34.865584       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:11:34.865663       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:11:34.865683       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:11:34.865759       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:11:34.865835       1 pv_controller.go:1232] deleteVolumeOperation [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83] started
I0907 05:11:34.866177       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-kg7rb" with version 2955
... skipping 27 lines ...
I0907 05:11:34.866733       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-wxv4l] status: phase Bound already set
I0907 05:11:34.866806       1 pv_controller.go:1038] volume "pvc-320da68d-deeb-41d5-a865-ee997ef92a09" bound to claim "azuredisk-59/pvc-wxv4l"
I0907 05:11:34.866828       1 pv_controller.go:1039] volume "pvc-320da68d-deeb-41d5-a865-ee997ef92a09" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-wxv4l (uid: 320da68d-deeb-41d5-a865-ee997ef92a09)", boundByController: true
I0907 05:11:34.866851       1 pv_controller.go:1040] claim "azuredisk-59/pvc-wxv4l" status after binding: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:11:34.872312       1 pv_controller.go:1341] isVolumeReleased[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is released
I0907 05:11:34.872333       1 pv_controller.go:1405] doDeleteVolume [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]
I0907 05:11:34.895650       1 pv_controller.go:1260] deletion of volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:11:34.895675       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:11:34.895686       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase Failed already set
E0907 05:11:34.895726       1 goroutinemap.go:150] Operation for "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" failed. No retries permitted until 2022-09-07 05:11:35.895695833 +0000 UTC m=+1155.426098224 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:11:35.226160       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-7czdt ReadyCondition updated. Updating timestamp.
I0907 05:11:35.573585       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="77.305µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:45236" resp=200
I0907 05:11:36.188234       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:11:38.687286       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 79 items received
I0907 05:11:45.574412       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="83.506µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:52312" resp=200
I0907 05:11:49.734783       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 28 lines ...
I0907 05:11:49.866929       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-59/pvc-kg7rb] status: set phase Bound
I0907 05:11:49.867002       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-kg7rb] status: phase Bound already set
I0907 05:11:49.867138       1 pv_controller.go:1038] volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" bound to claim "azuredisk-59/pvc-kg7rb"
I0907 05:11:49.867173       1 pv_controller.go:1039] volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
I0907 05:11:49.867194       1 pv_controller.go:1040] claim "azuredisk-59/pvc-kg7rb" status after binding: phase: Bound, bound to: "pvc-b78680ee-707f-4e01-9043-2b1650d055ce", bindCompleted: true, boundByController: true
I0907 05:11:49.867235       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:11:49.867286       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:11:49.867315       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:11:49.867339       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:11:49.867352       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:11:49.867370       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:11:49.867407       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" with version 2952
I0907 05:11:49.867429       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
... skipping 9 lines ...
I0907 05:11:49.868115       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: claim azuredisk-59/pvc-wxv4l found: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:11:49.868173       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: all is bound
I0907 05:11:49.868216       1 pv_controller.go:858] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: set phase Bound
I0907 05:11:49.868335       1 pv_controller.go:861] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: phase Bound already set
I0907 05:11:49.883934       1 pv_controller.go:1341] isVolumeReleased[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is released
I0907 05:11:49.883956       1 pv_controller.go:1405] doDeleteVolume [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]
I0907 05:11:49.908454       1 pv_controller.go:1260] deletion of volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:11:49.908499       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:11:49.908517       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase Failed already set
E0907 05:11:49.908701       1 goroutinemap.go:150] Operation for "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" failed. No retries permitted until 2022-09-07 05:11:51.908575516 +0000 UTC m=+1171.438977907 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:11:52.641370       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-320da68d-deeb-41d5-a865-ee997ef92a09) returned with <nil>
I0907 05:11:52.641427       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-320da68d-deeb-41d5-a865-ee997ef92a09) succeeded
I0907 05:11:52.641440       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-320da68d-deeb-41d5-a865-ee997ef92a09 was detached from node:capz-m4haq0-md-0-7czdt
I0907 05:11:52.641765       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-320da68d-deeb-41d5-a865-ee997ef92a09" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-320da68d-deeb-41d5-a865-ee997ef92a09") on node "capz-m4haq0-md-0-7czdt" 
I0907 05:11:52.696967       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce"
I0907 05:11:52.697003       1 azure_controller_standard.go:166] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce)
... skipping 42 lines ...
I0907 05:12:04.867541       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: volume is bound to claim azuredisk-59/pvc-wxv4l
I0907 05:12:04.867559       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: claim azuredisk-59/pvc-wxv4l found: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:12:04.867578       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: all is bound
I0907 05:12:04.867588       1 pv_controller.go:858] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: set phase Bound
I0907 05:12:04.867598       1 pv_controller.go:861] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: phase Bound already set
I0907 05:12:04.867617       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:12:04.867641       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:12:04.867662       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:12:04.867686       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:12:04.867696       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:12:04.867714       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:12:04.867743       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" with version 2952
I0907 05:12:04.867763       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
... skipping 2 lines ...
I0907 05:12:04.867819       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: all is bound
I0907 05:12:04.867828       1 pv_controller.go:858] updating PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: set phase Bound
I0907 05:12:04.867838       1 pv_controller.go:861] updating PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: phase Bound already set
I0907 05:12:04.867864       1 pv_controller.go:1232] deleteVolumeOperation [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83] started
I0907 05:12:04.873733       1 pv_controller.go:1341] isVolumeReleased[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is released
I0907 05:12:04.873759       1 pv_controller.go:1405] doDeleteVolume [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]
I0907 05:12:04.897155       1 pv_controller.go:1260] deletion of volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:12:04.897181       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:12:04.897191       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase Failed already set
E0907 05:12:04.897233       1 goroutinemap.go:150] Operation for "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" failed. No retries permitted until 2022-09-07 05:12:08.897201367 +0000 UTC m=+1188.427603758 (durationBeforeRetry 4s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:12:05.574705       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="66.404µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:59216" resp=200
I0907 05:12:06.206564       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:12:07.168707       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0907 05:12:13.149054       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce) returned with <nil>
I0907 05:12:13.149111       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce) succeeded
I0907 05:12:13.149125       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-b78680ee-707f-4e01-9043-2b1650d055ce was detached from node:capz-m4haq0-md-0-7czdt
... skipping 12 lines ...
I0907 05:12:19.867200       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: volume is bound to claim azuredisk-59/pvc-wxv4l
I0907 05:12:19.867222       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: claim azuredisk-59/pvc-wxv4l found: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:12:19.867238       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: all is bound
I0907 05:12:19.867256       1 pv_controller.go:858] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: set phase Bound
I0907 05:12:19.867269       1 pv_controller.go:861] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: phase Bound already set
I0907 05:12:19.867290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:12:19.867317       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:12:19.867339       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:12:19.867360       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:12:19.867369       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:12:19.867388       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:12:19.867408       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" with version 2952
I0907 05:12:19.867435       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b78680ee-707f-4e01-9043-2b1650d055ce]: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
... skipping 34 lines ...
I0907 05:12:19.870711       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-kg7rb] status: phase Bound already set
I0907 05:12:19.870725       1 pv_controller.go:1038] volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" bound to claim "azuredisk-59/pvc-kg7rb"
I0907 05:12:19.870748       1 pv_controller.go:1039] volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
I0907 05:12:19.870767       1 pv_controller.go:1040] claim "azuredisk-59/pvc-kg7rb" status after binding: phase: Bound, bound to: "pvc-b78680ee-707f-4e01-9043-2b1650d055ce", bindCompleted: true, boundByController: true
I0907 05:12:19.884457       1 pv_controller.go:1341] isVolumeReleased[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is released
I0907 05:12:19.884481       1 pv_controller.go:1405] doDeleteVolume [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]
I0907 05:12:19.884548       1 pv_controller.go:1260] deletion of volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) since it's in attaching or detaching state
I0907 05:12:19.884562       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: set phase Failed
I0907 05:12:19.884572       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase Failed already set
E0907 05:12:19.884654       1 goroutinemap.go:150] Operation for "delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]" failed. No retries permitted until 2022-09-07 05:12:27.884581014 +0000 UTC m=+1207.414983405 (durationBeforeRetry 8s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) since it's in attaching or detaching state"
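The retry delays in the delete errors above double from 500ms to 1s, 2s, 4s and 8s: each failed deleteVolumeOperation is retried with an exponential backoff while the disk is still attached or detaching. A minimal, hypothetical Go sketch of that doubling-with-a-cap pattern is below; retryBackoff and maxDelay are illustrative names, not the controller's API.

```go
package main

import (
	"fmt"
	"time"
)

// retryBackoff returns the delay before the nth retry of a failed operation,
// doubling from an initial delay up to a cap. This mirrors the
// 500ms -> 1s -> 2s -> 4s -> 8s progression visible in the log above.
func retryBackoff(initial, maxDelay time.Duration, attempt int) time.Duration {
	d := initial
	for i := 0; i < attempt; i++ {
		d *= 2
		if d > maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for attempt := 0; attempt < 6; attempt++ {
		fmt.Printf("retry %d after %v\n", attempt, retryBackoff(500*time.Millisecond, 2*time.Minute, attempt))
	}
}
```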
I0907 05:12:23.844637       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 05:12:24.727567       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 11 items received
I0907 05:12:25.575821       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="87.906µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:41312" resp=200
I0907 05:12:28.547567       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) returned with <nil>
I0907 05:12:28.547618       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83) succeeded
I0907 05:12:28.547634       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 was detached from node:capz-m4haq0-md-0-7czdt
... skipping 18 lines ...
I0907 05:12:34.867326       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-wxv4l" with version 2945
I0907 05:12:34.867328       1 pv_controller.go:858] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: set phase Bound
I0907 05:12:34.867344       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-wxv4l]: phase: Bound, bound to: "pvc-320da68d-deeb-41d5-a865-ee997ef92a09", bindCompleted: true, boundByController: true
I0907 05:12:34.867344       1 pv_controller.go:861] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: phase Bound already set
I0907 05:12:34.867362       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3097
I0907 05:12:34.867376       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-wxv4l]: volume "pvc-320da68d-deeb-41d5-a865-ee997ef92a09" found: phase: Bound, bound to: "azuredisk-59/pvc-wxv4l (uid: 320da68d-deeb-41d5-a865-ee997ef92a09)", boundByController: true
I0907 05:12:34.867380       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:12:34.867389       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-wxv4l]: claim is already correctly bound
I0907 05:12:34.867398       1 pv_controller.go:1012] binding volume "pvc-320da68d-deeb-41d5-a865-ee997ef92a09" to claim "azuredisk-59/pvc-wxv4l"
I0907 05:12:34.867406       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:12:34.867408       1 pv_controller.go:910] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: binding to "azuredisk-59/pvc-wxv4l"
I0907 05:12:34.867427       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:12:34.867427       1 pv_controller.go:922] updating PersistentVolume[pvc-320da68d-deeb-41d5-a865-ee997ef92a09]: already bound to "azuredisk-59/pvc-wxv4l"
... skipping 41 lines ...
I0907 05:12:40.268776       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83
I0907 05:12:40.268815       1 pv_controller.go:1436] volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" deleted
I0907 05:12:40.269005       1 pv_controller.go:1284] deleteVolumeOperation [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: success
I0907 05:12:40.280482       1 pv_protection_controller.go:205] Got event on PV pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83
I0907 05:12:40.280519       1 pv_protection_controller.go:125] Processing PV pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83
I0907 05:12:40.281044       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" with version 3208
I0907 05:12:40.283166       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: phase: Failed, bound to: "azuredisk-59/pvc-2z5d9 (uid: 977b1bc8-c9e7-4d78-a765-e87a72878f83)", boundByController: true
I0907 05:12:40.283212       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: volume is bound to claim azuredisk-59/pvc-2z5d9
I0907 05:12:40.283234       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: claim azuredisk-59/pvc-2z5d9 not found
I0907 05:12:40.283244       1 pv_controller.go:1108] reclaimVolume[pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83]: policy is Delete
I0907 05:12:40.283263       1 pv_controller.go:1753] scheduleOperation[delete-pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83[1bee486b-5ccf-4f98-8860-5259743f525c]]
I0907 05:12:40.283461       1 pv_controller.go:1232] deleteVolumeOperation [pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83] started
I0907 05:12:40.287445       1 pv_controller_base.go:235] volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" deleted
I0907 05:12:40.287679       1 pv_controller_base.go:505] deletion of claim "azuredisk-59/pvc-2z5d9" was already processed
I0907 05:12:40.288003       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83
I0907 05:12:40.288145       1 pv_protection_controller.go:128] Finished processing PV pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83 (7.615645ms)
I0907 05:12:40.289070       1 pv_controller.go:1239] error reading persistent volume "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83": persistentvolumes "pvc-977b1bc8-c9e7-4d78-a765-e87a72878f83" not found
I0907 05:12:42.027434       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-59/pvc-kg7rb"
I0907 05:12:42.027727       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-kg7rb" with version 3213
I0907 05:12:42.027771       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-59/pvc-kg7rb]: phase: Bound, bound to: "pvc-b78680ee-707f-4e01-9043-2b1650d055ce", bindCompleted: true, boundByController: true
I0907 05:12:42.027916       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-kg7rb]: volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" found: phase: Bound, bound to: "azuredisk-59/pvc-kg7rb (uid: b78680ee-707f-4e01-9043-2b1650d055ce)", boundByController: true
I0907 05:12:42.027950       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-kg7rb]: claim is already correctly bound
I0907 05:12:42.028075       1 pv_controller.go:1012] binding volume "pvc-b78680ee-707f-4e01-9043-2b1650d055ce" to claim "azuredisk-59/pvc-kg7rb"
... skipping 611 lines ...
I0907 05:13:41.915311       1 pv_protection_controller.go:205] Got event on PV pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a
I0907 05:13:41.915824       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a] started
I0907 05:13:41.916511       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]]
I0907 05:13:41.916529       1 pv_controller.go:1764] operation "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" is already running, skipping
I0907 05:13:41.919497       1 pv_controller.go:1341] isVolumeReleased[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is released
I0907 05:13:41.919518       1 pv_controller.go:1405] doDeleteVolume [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]
I0907 05:13:41.944619       1 pv_controller.go:1260] deletion of volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:13:41.944868       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: set phase Failed
I0907 05:13:41.944891       1 pv_controller.go:858] updating PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: set phase Failed
I0907 05:13:41.948643       1 pv_protection_controller.go:205] Got event on PV pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a
I0907 05:13:41.948870       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:13:41.948951       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:13:41.949032       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:13:41.949076       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: claim azuredisk-2546/pvc-glgk8 not found
I0907 05:13:41.949125       1 pv_controller.go:1108] reclaimVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: policy is Delete
I0907 05:13:41.949148       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]]
I0907 05:13:41.949174       1 pv_controller.go:1764] operation "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" is already running, skipping
I0907 05:13:41.949730       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:13:41.949760       1 pv_controller.go:879] volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" entered phase "Failed"
I0907 05:13:41.949770       1 pv_controller.go:901] volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
E0907 05:13:41.949890       1 goroutinemap.go:150] Operation for "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" failed. No retries permitted until 2022-09-07 05:13:42.449836629 +0000 UTC m=+1281.980239020 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:13:41.950107       1 event.go:291] "Event occurred" object="pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:13:45.574008       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="61.305µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:37460" resp=200
I0907 05:13:47.730966       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSIDriver total 8 items received
I0907 05:13:47.879139       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-control-plane-z4nt5"
I0907 05:13:49.739791       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:13:49.871751       1 pv_controller_base.go:528] resyncing PV controller
... skipping 13 lines ...
I0907 05:13:49.873295       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: already bound to "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17"
I0907 05:13:49.872329       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: claim azuredisk-2546/pvc-cb429 found: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:13:49.873568       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: all is bound
I0907 05:13:49.873659       1 pv_controller.go:858] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: set phase Bound
I0907 05:13:49.873703       1 pv_controller.go:861] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: phase Bound already set
I0907 05:13:49.873853       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:13:49.873903       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:13:49.873479       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-2546/pvc-cb429] status: set phase Bound
I0907 05:13:49.874066       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:13:49.874254       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: claim azuredisk-2546/pvc-glgk8 not found
I0907 05:13:49.874331       1 pv_controller.go:1108] reclaimVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: policy is Delete
I0907 05:13:49.874068       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-cb429] status: phase Bound already set
I0907 05:13:49.874464       1 pv_controller.go:1038] volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" bound to claim "azuredisk-2546/pvc-cb429"
I0907 05:13:49.874522       1 pv_controller.go:1039] volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-cb429 (uid: ddd386ea-75af-40cc-abd1-9f26a09f0c17)", boundByController: true
I0907 05:13:49.874546       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-cb429" status after binding: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:13:49.874429       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]]
I0907 05:13:49.874752       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a] started
I0907 05:13:49.882318       1 pv_controller.go:1341] isVolumeReleased[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is released
I0907 05:13:49.882439       1 pv_controller.go:1405] doDeleteVolume [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]
I0907 05:13:49.907191       1 pv_controller.go:1260] deletion of volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:13:49.907427       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: set phase Failed
I0907 05:13:49.907475       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase Failed already set
E0907 05:13:49.907551       1 goroutinemap.go:150] Operation for "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" failed. No retries permitted until 2022-09-07 05:13:50.907492687 +0000 UTC m=+1290.437895078 (durationBeforeRetry 1s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:13:50.247756       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-control-plane-z4nt5 ReadyCondition updated. Updating timestamp.
I0907 05:13:52.255580       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-m4haq0-md-0-7czdt"
I0907 05:13:52.256416       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17 to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:13:52.256448       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a to the node "capz-m4haq0-md-0-7czdt" mounted false
I0907 05:13:52.327393       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a\"}]}}" for node "capz-m4haq0-md-0-7czdt" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a 1}]
I0907 05:13:52.327734       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17") on node "capz-m4haq0-md-0-7czdt" 
... skipping 18 lines ...
I0907 05:13:55.574002       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="84.806µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:50856" resp=200
I0907 05:14:02.103821       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0907 05:14:04.720843       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:14:04.740442       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:14:04.872495       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:14:04.872609       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:14:04.872740       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:14:04.872820       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:14:04.872913       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-cb429" with version 3280
I0907 05:14:04.873039       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:14:04.873085       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" found: phase: Bound, bound to: "azuredisk-2546/pvc-cb429 (uid: ddd386ea-75af-40cc-abd1-9f26a09f0c17)", boundByController: true
I0907 05:14:04.873097       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: claim is already correctly bound
I0907 05:14:04.873110       1 pv_controller.go:1012] binding volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" to claim "azuredisk-2546/pvc-cb429"
... skipping 18 lines ...
I0907 05:14:04.874227       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: all is bound
I0907 05:14:04.874257       1 pv_controller.go:858] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: set phase Bound
I0907 05:14:04.874327       1 pv_controller.go:861] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: phase Bound already set
I0907 05:14:04.873925       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a] started
I0907 05:14:04.880705       1 pv_controller.go:1341] isVolumeReleased[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is released
I0907 05:14:04.880728       1 pv_controller.go:1405] doDeleteVolume [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]
I0907 05:14:04.904498       1 pv_controller.go:1260] deletion of volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted
I0907 05:14:04.904525       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: set phase Failed
I0907 05:14:04.904536       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase Failed already set
E0907 05:14:04.904621       1 goroutinemap.go:150] Operation for "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" failed. No retries permitted until 2022-09-07 05:14:06.90457166 +0000 UTC m=+1306.434974051 (durationBeforeRetry 2s). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-7czdt), could not be deleted"
I0907 05:14:05.574589       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="82.906µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:43076" resp=200
I0907 05:14:06.303469       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:14:07.734034       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17) returned with <nil>
I0907 05:14:07.734301       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17) succeeded
I0907 05:14:07.734324       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17 was detached from node:capz-m4haq0-md-0-7czdt
I0907 05:14:07.734432       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17") on node "capz-m4haq0-md-0-7czdt" 
... skipping 15 lines ...
I0907 05:14:19.873486       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: volume is bound to claim azuredisk-2546/pvc-cb429
I0907 05:14:19.873507       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: claim azuredisk-2546/pvc-cb429 found: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:14:19.873519       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: all is bound
I0907 05:14:19.873526       1 pv_controller.go:858] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: set phase Bound
I0907 05:14:19.873535       1 pv_controller.go:861] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: phase Bound already set
I0907 05:14:19.873548       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:14:19.873566       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:14:19.873585       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:14:19.873603       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: claim azuredisk-2546/pvc-glgk8 not found
I0907 05:14:19.873611       1 pv_controller.go:1108] reclaimVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: policy is Delete
I0907 05:14:19.873650       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]]
I0907 05:14:19.873677       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a] started
I0907 05:14:19.873941       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-cb429" with version 3280
... skipping 11 lines ...
I0907 05:14:19.876012       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-cb429] status: phase Bound already set
I0907 05:14:19.876169       1 pv_controller.go:1038] volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" bound to claim "azuredisk-2546/pvc-cb429"
I0907 05:14:19.876312       1 pv_controller.go:1039] volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-cb429 (uid: ddd386ea-75af-40cc-abd1-9f26a09f0c17)", boundByController: true
I0907 05:14:19.876478       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-cb429" status after binding: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:14:19.887644       1 pv_controller.go:1341] isVolumeReleased[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is released
I0907 05:14:19.887667       1 pv_controller.go:1405] doDeleteVolume [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]
I0907 05:14:19.887893       1 pv_controller.go:1260] deletion of volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) since it's in attaching or detaching state
I0907 05:14:19.887936       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: set phase Failed
I0907 05:14:19.887948       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase Failed already set
E0907 05:14:19.888088       1 goroutinemap.go:150] Operation for "delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]" failed. No retries permitted until 2022-09-07 05:14:23.887957943 +0000 UTC m=+1323.418360234 (durationBeforeRetry 4s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) since it's in attaching or detaching state"
I0907 05:14:21.665343       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 12 items received
I0907 05:14:22.692565       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 37 items received
I0907 05:14:25.574231       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="71.706µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:46312" resp=200
I0907 05:14:28.155933       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-7czdt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) returned with <nil>
I0907 05:14:28.155985       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a) succeeded
I0907 05:14:28.155996       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a was detached from node:capz-m4haq0-md-0-7czdt
... skipping 8 lines ...
I0907 05:14:34.874031       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: volume is bound to claim azuredisk-2546/pvc-cb429
I0907 05:14:34.874096       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: claim azuredisk-2546/pvc-cb429 found: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:14:34.874121       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: all is bound
I0907 05:14:34.874138       1 pv_controller.go:858] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: set phase Bound
I0907 05:14:34.874150       1 pv_controller.go:861] updating PersistentVolume[pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17]: phase Bound already set
I0907 05:14:34.874190       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3380
I0907 05:14:34.874238       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:14:34.874326       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:14:34.874397       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: claim azuredisk-2546/pvc-glgk8 not found
I0907 05:14:34.873863       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-cb429" with version 3280
I0907 05:14:34.874638       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: phase: Bound, bound to: "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17", bindCompleted: true, boundByController: true
I0907 05:14:34.874504       1 pv_controller.go:1108] reclaimVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: policy is Delete
I0907 05:14:34.874841       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-cb429]: volume "pvc-ddd386ea-75af-40cc-abd1-9f26a09f0c17" found: phase: Bound, bound to: "azuredisk-2546/pvc-cb429 (uid: ddd386ea-75af-40cc-abd1-9f26a09f0c17)", boundByController: true
... skipping 20 lines ...
I0907 05:14:40.131232       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a
I0907 05:14:40.131280       1 pv_controller.go:1436] volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" deleted
I0907 05:14:40.131295       1 pv_controller.go:1284] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: success
I0907 05:14:40.142380       1 pv_protection_controller.go:205] Got event on PV pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a
I0907 05:14:40.142702       1 pv_protection_controller.go:125] Processing PV pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a
I0907 05:14:40.142410       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" with version 3468
I0907 05:14:40.142836       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: phase: Failed, bound to: "azuredisk-2546/pvc-glgk8 (uid: c2e708c6-5f8c-4dae-85e6-6c6935b2607a)", boundByController: true
I0907 05:14:40.142881       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: volume is bound to claim azuredisk-2546/pvc-glgk8
I0907 05:14:40.143074       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: claim azuredisk-2546/pvc-glgk8 not found
I0907 05:14:40.143094       1 pv_controller.go:1108] reclaimVolume[pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a]: policy is Delete
I0907 05:14:40.143112       1 pv_controller.go:1753] scheduleOperation[delete-pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a[fa340d45-8cc4-48a8-955e-47cf7f1220f2]]
I0907 05:14:40.143141       1 pv_controller.go:1232] deleteVolumeOperation [pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a] started
I0907 05:14:40.146897       1 pv_controller.go:1244] Volume "pvc-c2e708c6-5f8c-4dae-85e6-6c6935b2607a" is already being deleted
... skipping 376 lines ...
I0907 05:14:57.888569       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2546, estimate: 0, errors: <nil>
I0907 05:14:57.897702       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2546" (238.032386ms)
I0907 05:14:58.187166       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1598
I0907 05:14:58.223991       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1598, name kube-root-ca.crt, uid 609084ab-ba4b-4867-9fb6-e8cdac5ed7e1, event type delete
I0907 05:14:58.225378       1 publisher.go:181] Finished syncing namespace "azuredisk-1598" (1.646737ms)
I0907 05:14:58.280482       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1598, name default-token-zz2s6, uid ceaee521-a3d9-4deb-b5d9-f03de54caee5, event type delete
E0907 05:14:58.298784       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1598/default: secrets "default-token-9z7fv" is forbidden: unable to create new content in namespace azuredisk-1598 because it is being terminated
I0907 05:14:58.326645       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1598/default), service account deleted, removing tokens
I0907 05:14:58.326979       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1598, name default, uid 571fa702-01b7-412e-bca5-9534827bdf38, event type delete
I0907 05:14:58.327161       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.2µs)
I0907 05:14:58.366765       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1598" (2.801µs)
I0907 05:14:58.368256       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1598, estimate: 0, errors: <nil>
I0907 05:14:58.378125       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1598" (193.724503ms)
I0907 05:14:58.687899       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0907 05:14:58.755810       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-qv2dg, uid 44d81d4c-9cbb-4093-b7ce-625a5e40ce62, event type delete
E0907 05:14:58.769571       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-rsksl" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0907 05:14:58.807358       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0907 05:14:58.807633       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 9caeaef1-d8fb-453c-8328-1910b6b0cf07, event type delete
I0907 05:14:58.807658       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (3.1µs)
I0907 05:14:58.824092       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 923070fb-538a-407b-a7df-e7841089610b, event type delete
I0907 05:14:58.827347       1 publisher.go:181] Finished syncing namespace "azuredisk-3410" (3.474489ms)
I0907 05:14:58.840998       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
... skipping 524 lines ...
I0907 05:16:05.649177       1 pv_controller.go:1108] reclaimVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: policy is Delete
I0907 05:16:05.649211       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]]
I0907 05:16:05.649221       1 pv_controller.go:1764] operation "delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]" is already running, skipping
I0907 05:16:05.649248       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83] started
I0907 05:16:05.655095       1 pv_controller.go:1341] isVolumeReleased[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is released
I0907 05:16:05.655116       1 pv_controller.go:1405] doDeleteVolume [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]
I0907 05:16:05.703300       1 pv_controller.go:1260] deletion of volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
I0907 05:16:05.703458       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: set phase Failed
I0907 05:16:05.703546       1 pv_controller.go:858] updating PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: set phase Failed
I0907 05:16:05.708155       1 pv_protection_controller.go:205] Got event on PV pvc-eb586b3f-db94-4720-9fc5-7547ae944b83
I0907 05:16:05.708413       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" with version 3698
I0907 05:16:05.708653       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: phase: Failed, bound to: "azuredisk-8582/pvc-twnlj (uid: eb586b3f-db94-4720-9fc5-7547ae944b83)", boundByController: true
I0907 05:16:05.708844       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is bound to claim azuredisk-8582/pvc-twnlj
I0907 05:16:05.708998       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: claim azuredisk-8582/pvc-twnlj not found
I0907 05:16:05.709147       1 pv_controller.go:1108] reclaimVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: policy is Delete
I0907 05:16:05.709274       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]]
I0907 05:16:05.709422       1 pv_controller.go:1764] operation "delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]" is already running, skipping
I0907 05:16:05.709939       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" with version 3698
I0907 05:16:05.710153       1 pv_controller.go:879] volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" entered phase "Failed"
I0907 05:16:05.710272       1 pv_controller.go:901] volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted
E0907 05:16:05.710471       1 goroutinemap.go:150] Operation for "delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]" failed. No retries permitted until 2022-09-07 05:16:06.210436326 +0000 UTC m=+1425.740838717 (durationBeforeRetry 500ms). Error: "disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:16:05.710848       1 event.go:291] "Event occurred" object="pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/virtualMachines/capz-m4haq0-md-0-4c66f), could not be deleted"
I0907 05:16:06.390435       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0907 05:16:12.000958       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj" startTime="2022-09-07 05:16:12.000882507 +0000 UTC m=+1431.531284898"
I0907 05:16:12.001040       1 deployment_controller.go:583] "Deployment has been deleted" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj"
I0907 05:16:12.001063       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-lnhvj" duration="165.713µs"
I0907 05:16:14.764838       1 gc_controller.go:161] GC'ing orphaned
... skipping 76 lines ...
I0907 05:16:19.879225       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is bound to claim azuredisk-8582/pvc-jptdm
I0907 05:16:19.879289       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: claim azuredisk-8582/pvc-jptdm found: phase: Bound, bound to: "pvc-968e561b-7ea0-48c2-b400-c7a797623bec", bindCompleted: true, boundByController: true
I0907 05:16:19.879311       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: all is bound
I0907 05:16:19.879320       1 pv_controller.go:858] updating PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: set phase Bound
I0907 05:16:19.879330       1 pv_controller.go:861] updating PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: phase Bound already set
I0907 05:16:19.879350       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" with version 3698
I0907 05:16:19.879372       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: phase: Failed, bound to: "azuredisk-8582/pvc-twnlj (uid: eb586b3f-db94-4720-9fc5-7547ae944b83)", boundByController: true
I0907 05:16:19.879396       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is bound to claim azuredisk-8582/pvc-twnlj
I0907 05:16:19.879416       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: claim azuredisk-8582/pvc-twnlj not found
I0907 05:16:19.879424       1 pv_controller.go:1108] reclaimVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: policy is Delete
I0907 05:16:19.879441       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]]
I0907 05:16:19.879491       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83] started
I0907 05:16:19.890736       1 pv_controller.go:1341] isVolumeReleased[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is released
I0907 05:16:19.890760       1 pv_controller.go:1405] doDeleteVolume [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]
I0907 05:16:19.890798       1 pv_controller.go:1260] deletion of volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) since it's in attaching or detaching state
I0907 05:16:19.890812       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: set phase Failed
I0907 05:16:19.890824       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: phase Failed already set
E0907 05:16:19.890862       1 goroutinemap.go:150] Operation for "delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]" failed. No retries permitted until 2022-09-07 05:16:20.890835025 +0000 UTC m=+1440.421237316 (durationBeforeRetry 1s). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) since it's in attaching or detaching state"
I0907 05:16:20.275140       1 node_lifecycle_controller.go:1047] Node capz-m4haq0-md-0-4c66f ReadyCondition updated. Updating timestamp.
I0907 05:16:23.767876       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 4 items received
I0907 05:16:25.574062       1 httplog.go:94] "HTTP" verb="GET" URI="/healthz" latency="63.705µs" userAgent="kube-probe/1.21+" srcIP="127.0.0.1:35672" resp=200
I0907 05:16:29.689335       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 4 items received
I0907 05:16:31.204278       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) returned with <nil>
I0907 05:16:31.204336       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83) succeeded
... skipping 11 lines ...
I0907 05:16:34.878391       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is bound to claim azuredisk-8582/pvc-jptdm
I0907 05:16:34.878414       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: claim azuredisk-8582/pvc-jptdm found: phase: Bound, bound to: "pvc-968e561b-7ea0-48c2-b400-c7a797623bec", bindCompleted: true, boundByController: true
I0907 05:16:34.878431       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: all is bound
I0907 05:16:34.878446       1 pv_controller.go:858] updating PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: set phase Bound
I0907 05:16:34.878458       1 pv_controller.go:861] updating PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: phase Bound already set
I0907 05:16:34.878478       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" with version 3698
I0907 05:16:34.878505       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: phase: Failed, bound to: "azuredisk-8582/pvc-twnlj (uid: eb586b3f-db94-4720-9fc5-7547ae944b83)", boundByController: true
I0907 05:16:34.878526       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is bound to claim azuredisk-8582/pvc-twnlj
I0907 05:16:34.878551       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: claim azuredisk-8582/pvc-twnlj not found
I0907 05:16:34.878561       1 pv_controller.go:1108] reclaimVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: policy is Delete
I0907 05:16:34.878578       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]]
I0907 05:16:34.878598       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" with version 3534
I0907 05:16:34.878619       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: phase: Bound, bound to: "azuredisk-8582/pvc-6wqhl (uid: ea0d49c5-a43e-420f-a05d-83d21c7764e1)", boundByController: true
... skipping 42 lines ...
I0907 05:16:40.179620       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83
I0907 05:16:40.179659       1 pv_controller.go:1436] volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" deleted
I0907 05:16:40.179673       1 pv_controller.go:1284] deleteVolumeOperation [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: success
I0907 05:16:40.190665       1 pv_protection_controller.go:205] Got event on PV pvc-eb586b3f-db94-4720-9fc5-7547ae944b83
I0907 05:16:40.190813       1 pv_protection_controller.go:125] Processing PV pvc-eb586b3f-db94-4720-9fc5-7547ae944b83
I0907 05:16:40.190661       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" with version 3753
I0907 05:16:40.191018       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: phase: Failed, bound to: "azuredisk-8582/pvc-twnlj (uid: eb586b3f-db94-4720-9fc5-7547ae944b83)", boundByController: true
I0907 05:16:40.191376       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: volume is bound to claim azuredisk-8582/pvc-twnlj
I0907 05:16:40.191406       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: claim azuredisk-8582/pvc-twnlj not found
I0907 05:16:40.191415       1 pv_controller.go:1108] reclaimVolume[pvc-eb586b3f-db94-4720-9fc5-7547ae944b83]: policy is Delete
I0907 05:16:40.191432       1 pv_controller.go:1753] scheduleOperation[delete-pvc-eb586b3f-db94-4720-9fc5-7547ae944b83[6edb7e1e-ac6c-4a9a-8201-8ec71585d8b0]]
I0907 05:16:40.191465       1 pv_controller.go:1232] deleteVolumeOperation [pvc-eb586b3f-db94-4720-9fc5-7547ae944b83] started
I0907 05:16:40.196298       1 pv_controller.go:1244] Volume "pvc-eb586b3f-db94-4720-9fc5-7547ae944b83" is already being deleted
... skipping 45 lines ...
I0907 05:16:41.053497       1 pv_controller.go:1108] reclaimVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: policy is Delete
I0907 05:16:41.053509       1 pv_controller.go:1753] scheduleOperation[delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]]
I0907 05:16:41.053517       1 pv_controller.go:1764] operation "delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]" is already running, skipping
I0907 05:16:41.053545       1 pv_controller.go:1232] deleteVolumeOperation [pvc-968e561b-7ea0-48c2-b400-c7a797623bec] started
I0907 05:16:41.056487       1 pv_controller.go:1341] isVolumeReleased[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is released
I0907 05:16:41.056507       1 pv_controller.go:1405] doDeleteVolume [pvc-968e561b-7ea0-48c2-b400-c7a797623bec]
I0907 05:16:41.056544       1 pv_controller.go:1260] deletion of volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-968e561b-7ea0-48c2-b400-c7a797623bec) since it's in attaching or detaching state
I0907 05:16:41.056562       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: set phase Failed
I0907 05:16:41.056572       1 pv_controller.go:858] updating PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: set phase Failed
I0907 05:16:41.060514       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" with version 3761
I0907 05:16:41.060547       1 pv_controller.go:879] volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" entered phase "Failed"
I0907 05:16:41.060559       1 pv_controller.go:901] volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-968e561b-7ea0-48c2-b400-c7a797623bec) since it's in attaching or detaching state
E0907 05:16:41.060606       1 goroutinemap.go:150] Operation for "delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]" failed. No retries permitted until 2022-09-07 05:16:41.56057968 +0000 UTC m=+1461.090982071 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-968e561b-7ea0-48c2-b400-c7a797623bec) since it's in attaching or detaching state"
I0907 05:16:41.060879       1 event.go:291] "Event occurred" object="pvc-968e561b-7ea0-48c2-b400-c7a797623bec" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-968e561b-7ea0-48c2-b400-c7a797623bec) since it's in attaching or detaching state"
I0907 05:16:41.060919       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" with version 3761
I0907 05:16:41.060947       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: phase: Failed, bound to: "azuredisk-8582/pvc-jptdm (uid: 968e561b-7ea0-48c2-b400-c7a797623bec)", boundByController: true
I0907 05:16:41.060980       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is bound to claim azuredisk-8582/pvc-jptdm
I0907 05:16:41.060999       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: claim azuredisk-8582/pvc-jptdm not found
I0907 05:16:41.061008       1 pv_controller.go:1108] reclaimVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: policy is Delete
I0907 05:16:41.061022       1 pv_controller.go:1753] scheduleOperation[delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]]
I0907 05:16:41.061031       1 pv_controller.go:1766] operation "delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]" postponed due to exponential backoff
I0907 05:16:41.061236       1 pv_protection_controller.go:205] Got event on PV pvc-968e561b-7ea0-48c2-b400-c7a797623bec
... skipping 29 lines ...
I0907 05:16:49.880458       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-6wqhl" status after binding: phase: Bound, bound to: "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1", bindCompleted: true, boundByController: true
I0907 05:16:49.880539       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: claim azuredisk-8582/pvc-6wqhl found: phase: Bound, bound to: "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1", bindCompleted: true, boundByController: true
I0907 05:16:49.880566       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: all is bound
I0907 05:16:49.880687       1 pv_controller.go:858] updating PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: set phase Bound
I0907 05:16:49.880721       1 pv_controller.go:861] updating PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: phase Bound already set
I0907 05:16:49.880762       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" with version 3761
I0907 05:16:49.880833       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: phase: Failed, bound to: "azuredisk-8582/pvc-jptdm (uid: 968e561b-7ea0-48c2-b400-c7a797623bec)", boundByController: true
I0907 05:16:49.880911       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is bound to claim azuredisk-8582/pvc-jptdm
I0907 05:16:49.880938       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: claim azuredisk-8582/pvc-jptdm not found
I0907 05:16:49.881004       1 pv_controller.go:1108] reclaimVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: policy is Delete
I0907 05:16:49.881089       1 pv_controller.go:1753] scheduleOperation[delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]]
I0907 05:16:49.881175       1 pv_controller.go:1232] deleteVolumeOperation [pvc-968e561b-7ea0-48c2-b400-c7a797623bec] started
I0907 05:16:49.894043       1 pv_controller.go:1341] isVolumeReleased[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is released
... skipping 3 lines ...
I0907 05:16:55.175572       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-968e561b-7ea0-48c2-b400-c7a797623bec
I0907 05:16:55.175610       1 pv_controller.go:1436] volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" deleted
I0907 05:16:55.175625       1 pv_controller.go:1284] deleteVolumeOperation [pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: success
I0907 05:16:55.182263       1 pv_protection_controller.go:205] Got event on PV pvc-968e561b-7ea0-48c2-b400-c7a797623bec
I0907 05:16:55.182301       1 pv_protection_controller.go:125] Processing PV pvc-968e561b-7ea0-48c2-b400-c7a797623bec
I0907 05:16:55.182669       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-968e561b-7ea0-48c2-b400-c7a797623bec" with version 3782
I0907 05:16:55.182720       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: phase: Failed, bound to: "azuredisk-8582/pvc-jptdm (uid: 968e561b-7ea0-48c2-b400-c7a797623bec)", boundByController: true
I0907 05:16:55.182751       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: volume is bound to claim azuredisk-8582/pvc-jptdm
I0907 05:16:55.182777       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: claim azuredisk-8582/pvc-jptdm not found
I0907 05:16:55.182786       1 pv_controller.go:1108] reclaimVolume[pvc-968e561b-7ea0-48c2-b400-c7a797623bec]: policy is Delete
I0907 05:16:55.182803       1 pv_controller.go:1753] scheduleOperation[delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]]
I0907 05:16:55.182811       1 pv_controller.go:1764] operation "delete-pvc-968e561b-7ea0-48c2-b400-c7a797623bec[35883d9a-953a-44e3-96f5-ac781406e675]" is already running, skipping
I0907 05:16:55.198400       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-968e561b-7ea0-48c2-b400-c7a797623bec
... skipping 46 lines ...
I0907 05:16:56.328012       1 pv_controller.go:1108] reclaimVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: policy is Delete
I0907 05:16:56.328023       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]]
I0907 05:16:56.328030       1 pv_controller.go:1764] operation "delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]" is already running, skipping
I0907 05:16:56.328091       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1] started
I0907 05:16:56.330151       1 pv_controller.go:1341] isVolumeReleased[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: volume is released
I0907 05:16:56.330171       1 pv_controller.go:1405] doDeleteVolume [pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]
I0907 05:16:56.330230       1 pv_controller.go:1260] deletion of volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) since it's in attaching or detaching state
I0907 05:16:56.330261       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: set phase Failed
I0907 05:16:56.330326       1 pv_controller.go:858] updating PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: set phase Failed
I0907 05:16:56.333849       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" with version 3788
I0907 05:16:56.334049       1 pv_controller.go:879] volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" entered phase "Failed"
I0907 05:16:56.334230       1 pv_controller.go:901] volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) since it's in attaching or detaching state
I0907 05:16:56.333857       1 pv_protection_controller.go:205] Got event on PV pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1
I0907 05:16:56.333877       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" with version 3788
I0907 05:16:56.334473       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: phase: Failed, bound to: "azuredisk-8582/pvc-6wqhl (uid: ea0d49c5-a43e-420f-a05d-83d21c7764e1)", boundByController: true
I0907 05:16:56.334526       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: volume is bound to claim azuredisk-8582/pvc-6wqhl
I0907 05:16:56.334552       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: claim azuredisk-8582/pvc-6wqhl not found
I0907 05:16:56.334561       1 pv_controller.go:1108] reclaimVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: policy is Delete
I0907 05:16:56.334576       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]]
E0907 05:16:56.334718       1 goroutinemap.go:150] Operation for "delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]" failed. No retries permitted until 2022-09-07 05:16:56.834415863 +0000 UTC m=+1476.364818254 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) since it's in attaching or detaching state"
I0907 05:16:56.334831       1 pv_controller.go:1766] operation "delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]" postponed due to exponential backoff
I0907 05:16:56.334991       1 event.go:291] "Event occurred" object="pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) since it's in attaching or detaching state"
I0907 05:16:58.203460       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0907 05:17:02.122963       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0907 05:17:02.991026       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) returned with <nil>
I0907 05:17:02.991086       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1) succeeded
I0907 05:17:02.991102       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1 was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:17:02.991134       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:17:04.326029       1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 14 items received
I0907 05:17:04.725297       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:17:04.747644       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:17:04.879506       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:17:04.879614       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" with version 3788
I0907 05:17:04.879714       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: phase: Failed, bound to: "azuredisk-8582/pvc-6wqhl (uid: ea0d49c5-a43e-420f-a05d-83d21c7764e1)", boundByController: true
I0907 05:17:04.879842       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: volume is bound to claim azuredisk-8582/pvc-6wqhl
I0907 05:17:04.879883       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: claim azuredisk-8582/pvc-6wqhl not found
I0907 05:17:04.879894       1 pv_controller.go:1108] reclaimVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: policy is Delete
I0907 05:17:04.879913       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]]
I0907 05:17:04.879958       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1] started
I0907 05:17:04.885889       1 pv_controller.go:1341] isVolumeReleased[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: volume is released
... skipping 3 lines ...
I0907 05:17:12.845817       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1
I0907 05:17:12.845875       1 pv_controller.go:1436] volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" deleted
I0907 05:17:12.846238       1 pv_controller.go:1284] deleteVolumeOperation [pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: success
I0907 05:17:12.864797       1 pv_protection_controller.go:205] Got event on PV pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1
I0907 05:17:12.864835       1 pv_protection_controller.go:125] Processing PV pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1
I0907 05:17:12.865103       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" with version 3813
I0907 05:17:12.865296       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: phase: Failed, bound to: "azuredisk-8582/pvc-6wqhl (uid: ea0d49c5-a43e-420f-a05d-83d21c7764e1)", boundByController: true
I0907 05:17:12.865456       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: volume is bound to claim azuredisk-8582/pvc-6wqhl
I0907 05:17:12.865567       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: claim azuredisk-8582/pvc-6wqhl not found
I0907 05:17:12.865692       1 pv_controller.go:1108] reclaimVolume[pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1]: policy is Delete
I0907 05:17:12.865928       1 pv_controller.go:1753] scheduleOperation[delete-pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1[8a619c93-8d51-45ca-9aef-e675e65d1b62]]
I0907 05:17:12.866151       1 pv_controller.go:1232] deleteVolumeOperation [pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1] started
I0907 05:17:12.870376       1 pv_controller.go:1244] Volume "pvc-ea0d49c5-a43e-420f-a05d-83d21c7764e1" is already being deleted
... skipping 159 lines ...
I0907 05:17:22.347771       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
I0907 05:17:22.363593       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7726" (236.103904ms)
I0907 05:17:22.374249       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" lun 0 to node "capz-m4haq0-md-0-4c66f".
I0907 05:17:22.374295       1 azure_controller_standard.go:93] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - attach disk(capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) with DiskEncryptionSetID()
I0907 05:17:22.633113       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3086
I0907 05:17:22.665198       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-lpt75, uid 70926d65-1afe-407d-9fea-11a257d66cda, event type delete
E0907 05:17:22.684386       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-d9zl6" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0907 05:17:22.738208       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid e4b99803-b331-45ce-96e4-7770bd876d48, event type delete
I0907 05:17:22.741107       1 publisher.go:181] Finished syncing namespace "azuredisk-3086" (3.029432ms)
I0907 05:17:22.774041       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0907 05:17:22.774253       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 3c06e897-4e16-49d6-ab91-9fe020957737, event type delete
I0907 05:17:22.774291       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3µs)
I0907 05:17:22.800290       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0907 05:17:22.800297       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.1µs)
I0907 05:17:22.810298       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (180.402333ms)
I0907 05:17:23.141999       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1387
I0907 05:17:23.236562       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-j4wfh, uid 422f4eff-a831-489c-a5ce-ce27f9b76b69, event type delete
E0907 05:17:23.257069       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-lgg5n" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0907 05:17:23.296150       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid 8054d3da-fa18-49b7-9c41-bb67e1031352, event type delete
I0907 05:17:23.297587       1 publisher.go:181] Finished syncing namespace "azuredisk-1387" (1.761535ms)
I0907 05:17:23.306847       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0907 05:17:23.306927       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid f10ec81a-3c17-4328-9c01-9b4a4ed12ab0, event type delete
I0907 05:17:23.307000       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (1.9µs)
I0907 05:17:23.344296       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (3.1µs)
... skipping 255 lines ...
I0907 05:18:29.454096       1 pv_controller.go:1108] reclaimVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: policy is Delete
I0907 05:18:29.454106       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]]
I0907 05:18:29.454113       1 pv_controller.go:1764] operation "delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]" is already running, skipping
I0907 05:18:29.454141       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8] started
I0907 05:18:29.456646       1 pv_controller.go:1341] isVolumeReleased[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: volume is released
I0907 05:18:29.456668       1 pv_controller.go:1405] doDeleteVolume [pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]
I0907 05:18:29.456706       1 pv_controller.go:1260] deletion of volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) since it's in attaching or detaching state
I0907 05:18:29.456722       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: set phase Failed
I0907 05:18:29.456732       1 pv_controller.go:858] updating PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: set phase Failed
I0907 05:18:29.464001       1 pv_protection_controller.go:205] Got event on PV pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8
I0907 05:18:29.464204       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" with version 4032
I0907 05:18:29.464457       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: phase: Failed, bound to: "azuredisk-7051/pvc-f66gr (uid: 3ae59c3b-cb76-4add-9cfe-22a38db223c8)", boundByController: true
I0907 05:18:29.464485       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: volume is bound to claim azuredisk-7051/pvc-f66gr
I0907 05:18:29.464567       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: claim azuredisk-7051/pvc-f66gr not found
I0907 05:18:29.464672       1 pv_controller.go:1108] reclaimVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: policy is Delete
I0907 05:18:29.464749       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]]
I0907 05:18:29.464790       1 pv_controller.go:1764] operation "delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]" is already running, skipping
I0907 05:18:29.464381       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" with version 4032
I0907 05:18:29.464818       1 pv_controller.go:879] volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" entered phase "Failed"
I0907 05:18:29.464829       1 pv_controller.go:901] volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) since it's in attaching or detaching state
E0907 05:18:29.464880       1 goroutinemap.go:150] Operation for "delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]" failed. No retries permitted until 2022-09-07 05:18:29.964851433 +0000 UTC m=+1569.495253724 (durationBeforeRetry 500ms). Error: "failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) since it's in attaching or detaching state"
I0907 05:18:29.465187       1 event.go:291] "Event occurred" object="pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) since it's in attaching or detaching state"
I0907 05:18:31.265217       1 azure_controller_standard.go:184] azureDisk - update(capz-m4haq0): vm(capz-m4haq0-md-0-4c66f) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) returned with <nil>
I0907 05:18:31.265265       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8) succeeded
I0907 05:18:31.265277       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8 was detached from node:capz-m4haq0-md-0-4c66f
I0907 05:18:31.265305       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8") on node "capz-m4haq0-md-0-4c66f" 
I0907 05:18:33.479091       1 reflector.go:530] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0907 05:18:34.728151       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:18:34.751714       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:18:34.770125       1 gc_controller.go:161] GC'ing orphaned
I0907 05:18:34.770175       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 05:18:34.884394       1 pv_controller_base.go:528] resyncing PV controller
I0907 05:18:34.884470       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" with version 4032
I0907 05:18:34.884511       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: phase: Failed, bound to: "azuredisk-7051/pvc-f66gr (uid: 3ae59c3b-cb76-4add-9cfe-22a38db223c8)", boundByController: true
I0907 05:18:34.884559       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: volume is bound to claim azuredisk-7051/pvc-f66gr
I0907 05:18:34.884593       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: claim azuredisk-7051/pvc-f66gr not found
I0907 05:18:34.884603       1 pv_controller.go:1108] reclaimVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: policy is Delete
I0907 05:18:34.884622       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]]
I0907 05:18:34.884667       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8] started
I0907 05:18:34.890349       1 pv_controller.go:1341] isVolumeReleased[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: volume is released
... skipping 4 lines ...
I0907 05:18:40.265420       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-m4haq0/providers/Microsoft.Compute/disks/capz-m4haq0-dynamic-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8
I0907 05:18:40.265462       1 pv_controller.go:1436] volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" deleted
I0907 05:18:40.265477       1 pv_controller.go:1284] deleteVolumeOperation [pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: success
I0907 05:18:40.275571       1 pv_protection_controller.go:205] Got event on PV pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8
I0907 05:18:40.275866       1 pv_protection_controller.go:125] Processing PV pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8
I0907 05:18:40.275951       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" with version 4050
I0907 05:18:40.276501       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: phase: Failed, bound to: "azuredisk-7051/pvc-f66gr (uid: 3ae59c3b-cb76-4add-9cfe-22a38db223c8)", boundByController: true
I0907 05:18:40.276632       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: volume is bound to claim azuredisk-7051/pvc-f66gr
I0907 05:18:40.276754       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: claim azuredisk-7051/pvc-f66gr not found
I0907 05:18:40.276792       1 pv_controller.go:1108] reclaimVolume[pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8]: policy is Delete
I0907 05:18:40.276871       1 pv_controller.go:1753] scheduleOperation[delete-pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8[b944c0f6-316b-4871-9b24-def8ebe10d52]]
I0907 05:18:40.276993       1 pv_controller.go:1232] deleteVolumeOperation [pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8] started
I0907 05:18:40.281442       1 pv_controller.go:1244] Volume "pvc-3ae59c3b-cb76-4add-9cfe-22a38db223c8" is already being deleted
... skipping 155 lines ...
I0907 05:18:49.748761       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-wlxgt.17127c480d7c1d07, uid ef755ce7-7fb3-43e0-ad57-257530cecd0f, event type delete
I0907 05:18:49.752234       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-wlxgt.17127c48af2c926d, uid dae647a1-31b4-4d37-a05d-efaa731d96eb, event type delete
I0907 05:18:49.752324       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 05:18:49.756414       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name pvc-f66gr.17127c42dfd355aa, uid c0ba989b-14e9-46b0-ae0f-c9584bbd9060, event type delete
I0907 05:18:49.760513       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name pvc-f66gr.17127c436b96589b, uid 0c499067-81c9-42f1-bd1b-1836f00fbd10, event type delete
I0907 05:18:49.828710       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-v2wsh, uid beffda9c-fbbd-4a0d-bdd2-cac749f0fef5, event type delete
E0907 05:18:49.844333       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-xbng6" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0907 05:18:49.853936       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0907 05:18:49.853990       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 032b639b-3b63-4ea2-b372-1bd98bb18527, event type delete
I0907 05:18:49.854270       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (2.9µs)
I0907 05:18:49.877786       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7051, estimate: 0, errors: <nil>
I0907 05:18:49.877803       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (2.4µs)
I0907 05:18:49.884834       1 pv_controller_base.go:528] resyncing PV controller
... skipping 471 lines ...
I0907 05:20:32.284087       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8470" (3µs)
2022/09/07 05:20:33 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1420.389 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-v2dww, container manager
STEP: Dumping workload cluster default/capz-m4haq0 logs
Sep  7 05:22:00.318: INFO: Collecting logs for Linux node capz-m4haq0-control-plane-z4nt5 in cluster capz-m4haq0 in namespace default

Sep  7 05:23:00.320: INFO: Collecting boot logs for AzureMachine capz-m4haq0-control-plane-z4nt5

Failed to get logs for machine capz-m4haq0-control-plane-fxgtd, cluster default/capz-m4haq0: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 05:23:01.293: INFO: Collecting logs for Linux node capz-m4haq0-md-0-7czdt in cluster capz-m4haq0 in namespace default

Sep  7 05:24:01.295: INFO: Collecting boot logs for AzureMachine capz-m4haq0-md-0-7czdt

Failed to get logs for machine capz-m4haq0-md-0-6cc9ccfc4c-p8fmx, cluster default/capz-m4haq0: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 05:24:01.666: INFO: Collecting logs for Linux node capz-m4haq0-md-0-4c66f in cluster capz-m4haq0 in namespace default

Sep  7 05:25:01.668: INFO: Collecting boot logs for AzureMachine capz-m4haq0-md-0-4c66f

Failed to get logs for machine capz-m4haq0-md-0-6cc9ccfc4c-z2chm, cluster default/capz-m4haq0: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-m4haq0 kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-8m59c, container calico-kube-controllers
STEP: Fetching kube-system pod logs took 430.503198ms
STEP: Dumping workload cluster default/capz-m4haq0 Azure activity log
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-zfj6f
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-8m59c
STEP: Creating log watcher for controller kube-system/calico-node-7bv29, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-7bv29
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-m4haq0-control-plane-z4nt5, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-m4haq0-control-plane-z4nt5
STEP: failed to find events of Pod "kube-controller-manager-capz-m4haq0-control-plane-z4nt5"
STEP: Creating log watcher for controller kube-system/kube-proxy-5xxg9, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-5xxg9
STEP: Creating log watcher for controller kube-system/kube-proxy-68jvg, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-68jvg
STEP: Creating log watcher for controller kube-system/kube-proxy-9vlsp, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-proxy-9vlsp
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-m4haq0-control-plane-z4nt5, container kube-scheduler
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-m4haq0-control-plane-z4nt5
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-m4haq0-control-plane-z4nt5
STEP: Creating log watcher for controller kube-system/metrics-server-8c95fb79b-zfj6f, container metrics-server
STEP: failed to find events of Pod "kube-apiserver-capz-m4haq0-control-plane-z4nt5"
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-829x6
STEP: Collecting events for Pod kube-system/calico-node-m6ndw
STEP: Creating log watcher for controller kube-system/calico-node-x2ksp, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-x2ksp
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-829x6, container coredns
STEP: Creating log watcher for controller kube-system/etcd-capz-m4haq0-control-plane-z4nt5, container etcd
STEP: Creating log watcher for controller kube-system/coredns-558bd4d5db-w9crw, container coredns
STEP: Collecting events for Pod kube-system/coredns-558bd4d5db-w9crw
STEP: Collecting events for Pod kube-system/etcd-capz-m4haq0-control-plane-z4nt5
STEP: failed to find events of Pod "etcd-capz-m4haq0-control-plane-z4nt5"
STEP: Creating log watcher for controller kube-system/calico-node-m6ndw, container calico-node
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-m4haq0-control-plane-z4nt5, container kube-apiserver
STEP: failed to find events of Pod "kube-scheduler-capz-m4haq0-control-plane-z4nt5"
STEP: Fetching activity logs took 3.102688008s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-m4haq0" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...