Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-05 20:09
Elapsed: 50m11s
Revision
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 628 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
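[Editor's note: the hack script above is shell; the following is only a rough client-go sketch of the pattern the log records -- delete any stale secret (the "NotFound" error is the expected outcome), then create and label a fresh one. The namespace, label key, kubeconfig path, and secret data below are placeholders, not the script's actual values.]

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	secrets := cs.CoreV1().Secrets("default") // namespace is an assumption

	// "Error from server (NotFound)" in the log is the expected result of this delete.
	if err := secrets.Delete(context.TODO(), "cluster-identity-secret", metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
		panic(err)
	}

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "cluster-identity-secret",
			Labels: map[string]string{"example.io/purpose": "cluster-identity"}, // hypothetical label
		},
		StringData: map[string]string{"clientSecret": "<redacted>"}, // placeholder data
	}
	if _, err := secrets.Create(context.TODO(), sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("secret/cluster-identity-secret created")
}
```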
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 250 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  5 20:28:23.795: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [1.037 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 20:28:29.542: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qxqhk" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  5 20:28:29.651: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 109.709661ms
Sep  5 20:28:31.761: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219697547s
Sep  5 20:28:33.873: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330987368s
Sep  5 20:28:35.983: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.441792066s
Sep  5 20:28:38.094: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552425533s
Sep  5 20:28:40.203: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661780736s
Sep  5 20:28:42.315: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773722747s
Sep  5 20:28:44.428: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 14.885981929s
Sep  5 20:28:46.540: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 16.998671207s
Sep  5 20:28:48.658: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 19.115829575s
Sep  5 20:28:50.775: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Pending", Reason="", readiness=false. Elapsed: 21.23288614s
Sep  5 20:28:52.892: INFO: Pod "azuredisk-volume-tester-qxqhk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.350712122s
STEP: Saw pod success
Sep  5 20:28:52.892: INFO: Pod "azuredisk-volume-tester-qxqhk" satisfied condition "Succeeded or Failed"
Sep  5 20:28:52.892: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-qxqhk"
Sep  5 20:28:53.017: INFO: Pod azuredisk-volume-tester-qxqhk has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-qxqhk in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:28:53.369: INFO: deleting PVC "azuredisk-1353"/"pvc-x4srj"
Sep  5 20:28:53.369: INFO: Deleting PersistentVolumeClaim "pvc-x4srj"
STEP: waiting for claim's PV "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" to be deleted
Sep  5 20:28:53.480: INFO: Waiting up to 10m0s for PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 to get deleted
Sep  5 20:28:53.589: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 found and phase=Failed (108.925425ms)
Sep  5 20:28:58.702: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 found and phase=Failed (5.22251461s)
Sep  5 20:29:03.812: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 found and phase=Failed (10.332110816s)
Sep  5 20:29:08.922: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 found and phase=Failed (15.442010849s)
Sep  5 20:29:14.035: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 found and phase=Failed (20.554747186s)
Sep  5 20:29:19.144: INFO: PersistentVolume pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 was removed
Sep  5 20:29:19.144: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  5 20:29:19.253: INFO: Claim "azuredisk-1353" in namespace "pvc-x4srj" doesn't exist in the system
Sep  5 20:29:19.253: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-9c7qv
Sep  5 20:29:19.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  5 20:29:42.383: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-225x7"
Sep  5 20:29:42.496: INFO: Error getting logs for pod azuredisk-volume-tester-225x7: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-225x7)
STEP: Deleting pod azuredisk-volume-tester-225x7 in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:29:42.827: INFO: deleting PVC "azuredisk-1563"/"pvc-zdzrv"
Sep  5 20:29:42.827: INFO: Deleting PersistentVolumeClaim "pvc-zdzrv"
STEP: waiting for claim's PV "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" to be deleted
... skipping 17 lines ...
Sep  5 20:31:04.854: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m21.917388353s)
Sep  5 20:31:09.964: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m27.02716001s)
Sep  5 20:31:15.073: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m32.136887929s)
Sep  5 20:31:20.184: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m37.247733789s)
Sep  5 20:31:25.296: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m42.359129658s)
Sep  5 20:31:30.407: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Bound (1m47.470760587s)
Sep  5 20:31:35.522: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (1m52.585112154s)
Sep  5 20:31:40.635: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (1m57.698419381s)
Sep  5 20:31:45.746: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (2m2.809164084s)
Sep  5 20:31:50.858: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (2m7.921276029s)
Sep  5 20:31:55.972: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (2m13.035115269s)
Sep  5 20:32:01.082: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca found and phase=Failed (2m18.145138601s)
Sep  5 20:32:06.192: INFO: PersistentVolume pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca was removed
Sep  5 20:32:06.192: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Sep  5 20:32:06.304: INFO: Claim "azuredisk-1563" in namespace "pvc-zdzrv" doesn't exist in the system
Sep  5 20:32:06.304: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-9q8vn
Sep  5 20:32:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 20:32:08.271: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-cv4mc" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  5 20:32:08.380: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 109.307316ms
Sep  5 20:32:10.492: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220710333s
Sep  5 20:32:12.605: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.333814506s
Sep  5 20:32:14.716: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.444869784s
Sep  5 20:32:16.826: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.554747802s
Sep  5 20:32:18.936: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664831055s
Sep  5 20:32:21.047: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.7765202s
Sep  5 20:32:23.166: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.895250296s
Sep  5 20:32:25.283: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 17.012270971s
Sep  5 20:32:27.400: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.129031202s
Sep  5 20:32:29.524: INFO: Pod "azuredisk-volume-tester-cv4mc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 21.253447437s
STEP: Saw pod success
Sep  5 20:32:29.524: INFO: Pod "azuredisk-volume-tester-cv4mc" satisfied condition "Succeeded or Failed"
Sep  5 20:32:29.524: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-cv4mc"
Sep  5 20:32:29.648: INFO: Pod azuredisk-volume-tester-cv4mc has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-cv4mc in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:32:29.993: INFO: deleting PVC "azuredisk-7463"/"pvc-k7wmw"
Sep  5 20:32:29.993: INFO: Deleting PersistentVolumeClaim "pvc-k7wmw"
STEP: waiting for claim's PV "pvc-4818e932-c294-4904-bc10-3f517dd0280d" to be deleted
Sep  5 20:32:30.106: INFO: Waiting up to 10m0s for PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d to get deleted
Sep  5 20:32:30.215: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (109.226804ms)
Sep  5 20:32:35.325: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (5.219308694s)
Sep  5 20:32:40.438: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (10.332334269s)
Sep  5 20:32:45.552: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (15.446197502s)
Sep  5 20:32:50.666: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (20.56030385s)
Sep  5 20:32:55.779: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (25.673065382s)
Sep  5 20:33:00.889: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d found and phase=Failed (30.783276353s)
Sep  5 20:33:05.999: INFO: PersistentVolume pvc-4818e932-c294-4904-bc10-3f517dd0280d was removed
Sep  5 20:33:05.999: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7463 to be removed
Sep  5 20:33:06.110: INFO: Claim "azuredisk-7463" in namespace "pvc-k7wmw" doesn't exist in the system
Sep  5 20:33:06.110: INFO: deleting StorageClass azuredisk-7463-kubernetes.io-azure-disk-dynamic-sc-w9dgb
Sep  5 20:33:06.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7463" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  5 20:33:08.073: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6t9lm" in namespace "azuredisk-9241" to be "Error status code"
Sep  5 20:33:08.183: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 110.170245ms
Sep  5 20:33:10.294: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221034091s
Sep  5 20:33:12.405: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331782878s
Sep  5 20:33:14.515: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44216018s
Sep  5 20:33:16.626: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.552765725s
Sep  5 20:33:18.737: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 10.664042089s
Sep  5 20:33:20.847: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 12.773975054s
Sep  5 20:33:22.959: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 14.886000977s
Sep  5 20:33:25.076: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 17.003153697s
Sep  5 20:33:27.193: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Pending", Reason="", readiness=false. Elapsed: 19.12052949s
Sep  5 20:33:29.314: INFO: Pod "azuredisk-volume-tester-6t9lm": Phase="Failed", Reason="", readiness=false. Elapsed: 21.24103033s
STEP: Saw pod failure
Sep  5 20:33:29.314: INFO: Pod "azuredisk-volume-tester-6t9lm" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  5 20:33:29.433: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-6t9lm"
Sep  5 20:33:29.546: INFO: Pod azuredisk-volume-tester-6t9lm has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-6t9lm in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:33:29.891: INFO: deleting PVC "azuredisk-9241"/"pvc-tnsrj"
Sep  5 20:33:29.891: INFO: Deleting PersistentVolumeClaim "pvc-tnsrj"
STEP: waiting for claim's PV "pvc-58d5db18-b486-467c-8e46-4fc75532e379" to be deleted
Sep  5 20:33:30.013: INFO: Waiting up to 10m0s for PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 to get deleted
Sep  5 20:33:30.123: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (109.883338ms)
Sep  5 20:33:35.233: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (5.219886975s)
Sep  5 20:33:40.344: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (10.331006921s)
Sep  5 20:33:45.458: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (15.445478144s)
Sep  5 20:33:50.572: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (20.559430042s)
Sep  5 20:33:55.683: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (25.670155354s)
Sep  5 20:34:00.797: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 found and phase=Failed (30.784162394s)
Sep  5 20:34:05.911: INFO: PersistentVolume pvc-58d5db18-b486-467c-8e46-4fc75532e379 was removed
Sep  5 20:34:05.911: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  5 20:34:06.020: INFO: Claim "azuredisk-9241" in namespace "pvc-tnsrj" doesn't exist in the system
Sep  5 20:34:06.020: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-ht6xp
Sep  5 20:34:06.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9241" for this suite.
... skipping 53 lines ...
Sep  5 20:35:01.139: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (5.222975425s)
Sep  5 20:35:06.251: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (10.334554254s)
Sep  5 20:35:11.361: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (15.444952004s)
Sep  5 20:35:16.471: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (20.555080075s)
Sep  5 20:35:21.585: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (25.668938768s)
Sep  5 20:35:26.699: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Bound (30.783207702s)
Sep  5 20:35:31.811: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (35.894729538s)
Sep  5 20:35:36.924: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (41.007259963s)
Sep  5 20:35:42.037: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (46.121170454s)
Sep  5 20:35:47.149: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (51.232702899s)
Sep  5 20:35:52.262: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (56.345356224s)
Sep  5 20:35:57.372: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (1m1.455792335s)
Sep  5 20:36:02.485: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da found and phase=Failed (1m6.569082165s)
Sep  5 20:36:07.596: INFO: PersistentVolume pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da was removed
Sep  5 20:36:07.596: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  5 20:36:07.705: INFO: Claim "azuredisk-9336" in namespace "pvc-hqr8j" doesn't exist in the system
Sep  5 20:36:07.705: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-sx84b
Sep  5 20:36:07.816: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-qgjmb"
Sep  5 20:36:07.931: INFO: Pod azuredisk-volume-tester-qgjmb has the following logs: 
... skipping 8 lines ...
Sep  5 20:36:13.614: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (5.223123128s)
Sep  5 20:36:18.728: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (10.336834061s)
Sep  5 20:36:23.839: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (15.448256811s)
Sep  5 20:36:28.950: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (20.559243117s)
Sep  5 20:36:34.063: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (25.671498549s)
Sep  5 20:36:39.175: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Bound (30.783627147s)
Sep  5 20:36:44.288: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Failed (35.897185699s)
Sep  5 20:36:49.398: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Failed (41.00700212s)
Sep  5 20:36:54.510: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Failed (46.118642551s)
Sep  5 20:36:59.624: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba found and phase=Failed (51.2329679s)
Sep  5 20:37:04.738: INFO: PersistentVolume pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba was removed
Sep  5 20:37:04.738: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  5 20:37:04.847: INFO: Claim "azuredisk-9336" in namespace "pvc-j6psx" doesn't exist in the system
Sep  5 20:37:04.847: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-twt6v
Sep  5 20:37:04.958: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-fzt5f"
Sep  5 20:37:05.084: INFO: Pod azuredisk-volume-tester-fzt5f has the following logs: 
... skipping 7 lines ...
Sep  5 20:37:05.645: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (110.035197ms)
Sep  5 20:37:10.757: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (5.221447917s)
Sep  5 20:37:15.868: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (10.332556141s)
Sep  5 20:37:20.980: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (15.444976014s)
Sep  5 20:37:26.091: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (20.555889024s)
Sep  5 20:37:31.202: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Bound (25.666386051s)
Sep  5 20:37:36.314: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (30.778830518s)
Sep  5 20:37:41.427: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (35.891923331s)
Sep  5 20:37:46.541: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (41.00605754s)
Sep  5 20:37:51.658: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (46.122499968s)
Sep  5 20:37:56.768: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (51.232443869s)
Sep  5 20:38:01.882: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc found and phase=Failed (56.346286507s)
Sep  5 20:38:06.995: INFO: PersistentVolume pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc was removed
Sep  5 20:38:06.995: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  5 20:38:07.107: INFO: Claim "azuredisk-9336" in namespace "pvc-dqshn" doesn't exist in the system
Sep  5 20:38:07.108: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-jhw8j
Sep  5 20:38:07.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9336" for this suite.
... skipping 59 lines ...
Sep  5 20:39:42.911: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (5.221849537s)
Sep  5 20:39:48.022: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (10.332838899s)
Sep  5 20:39:53.136: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (15.447097433s)
Sep  5 20:39:58.246: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (20.557240918s)
Sep  5 20:40:03.358: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (25.66946456s)
Sep  5 20:40:08.472: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Bound (30.783255728s)
Sep  5 20:40:13.586: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Failed (35.897225331s)
Sep  5 20:40:18.700: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Failed (41.010867139s)
Sep  5 20:40:23.810: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Failed (46.120915648s)
Sep  5 20:40:28.925: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe found and phase=Failed (51.235814389s)
Sep  5 20:40:34.037: INFO: PersistentVolume pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe was removed
Sep  5 20:40:34.043: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2205 to be removed
Sep  5 20:40:34.152: INFO: Claim "azuredisk-2205" in namespace "pvc-zgfkq" doesn't exist in the system
Sep  5 20:40:34.152: INFO: deleting StorageClass azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-vdbtb
Sep  5 20:40:34.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2205" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 20:40:57.951: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-tdw65" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Sep  5 20:40:58.073: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 122.020907ms
Sep  5 20:41:00.185: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233857324s
Sep  5 20:41:02.301: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350042552s
Sep  5 20:41:04.419: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467199362s
Sep  5 20:41:06.537: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585999697s
Sep  5 20:41:08.654: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 10.70283935s
... skipping 5 lines ...
Sep  5 20:41:21.358: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 23.406942345s
Sep  5 20:41:23.475: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Pending", Reason="", readiness=false. Elapsed: 25.523229499s
Sep  5 20:41:25.591: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Running", Reason="", readiness=true. Elapsed: 27.639442128s
Sep  5 20:41:27.707: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Running", Reason="", readiness=false. Elapsed: 29.755725806s
Sep  5 20:41:29.824: INFO: Pod "azuredisk-volume-tester-tdw65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.872663489s
STEP: Saw pod success
Sep  5 20:41:29.824: INFO: Pod "azuredisk-volume-tester-tdw65" satisfied condition "Succeeded or Failed"
Sep  5 20:41:29.824: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-tdw65"
Sep  5 20:41:29.943: INFO: Pod azuredisk-volume-tester-tdw65 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-tdw65 in namespace azuredisk-1387
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:41:30.377: INFO: deleting PVC "azuredisk-1387"/"pvc-bqhm6"
Sep  5 20:41:30.377: INFO: Deleting PersistentVolumeClaim "pvc-bqhm6"
STEP: waiting for claim's PV "pvc-be3c4196-98b0-4993-b2df-907c3d698924" to be deleted
Sep  5 20:41:30.489: INFO: Waiting up to 10m0s for PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 to get deleted
Sep  5 20:41:30.619: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (129.447596ms)
Sep  5 20:41:35.733: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (5.244256695s)
Sep  5 20:41:40.847: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (10.357969485s)
Sep  5 20:41:45.957: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (15.468178521s)
Sep  5 20:41:51.070: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (20.581041863s)
Sep  5 20:41:56.184: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (25.694748668s)
Sep  5 20:42:01.295: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 found and phase=Failed (30.805616789s)
Sep  5 20:42:06.405: INFO: PersistentVolume pvc-be3c4196-98b0-4993-b2df-907c3d698924 was removed
Sep  5 20:42:06.405: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  5 20:42:06.516: INFO: Claim "azuredisk-1387" in namespace "pvc-bqhm6" doesn't exist in the system
Sep  5 20:42:06.516: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-bv97j
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:42:06.848: INFO: deleting PVC "azuredisk-1387"/"pvc-2wmjv"
Sep  5 20:42:06.848: INFO: Deleting PersistentVolumeClaim "pvc-2wmjv"
STEP: waiting for claim's PV "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" to be deleted
Sep  5 20:42:06.959: INFO: Waiting up to 10m0s for PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 to get deleted
Sep  5 20:42:07.069: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (110.058456ms)
Sep  5 20:42:12.179: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (5.220290812s)
Sep  5 20:42:17.290: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (10.330558829s)
Sep  5 20:42:22.400: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (15.440665716s)
Sep  5 20:42:27.511: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (20.552214575s)
Sep  5 20:42:32.625: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 found and phase=Failed (25.665730714s)
Sep  5 20:42:37.734: INFO: PersistentVolume pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 was removed
Sep  5 20:42:37.734: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  5 20:42:37.844: INFO: Claim "azuredisk-1387" in namespace "pvc-2wmjv" doesn't exist in the system
Sep  5 20:42:37.844: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-f5xl5
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 20:42:50.893: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-f2zqh" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Sep  5 20:42:51.003: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 109.67048ms
Sep  5 20:42:53.116: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222699981s
Sep  5 20:42:55.227: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.33392844s
Sep  5 20:42:57.338: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.445030348s
Sep  5 20:42:59.449: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55589512s
Sep  5 20:43:01.559: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.665763251s
Sep  5 20:43:03.671: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 12.777435877s
Sep  5 20:43:05.781: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 14.887844955s
Sep  5 20:43:07.897: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Pending", Reason="", readiness=false. Elapsed: 17.003646532s
Sep  5 20:43:10.013: INFO: Pod "azuredisk-volume-tester-f2zqh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.120138496s
STEP: Saw pod success
Sep  5 20:43:10.014: INFO: Pod "azuredisk-volume-tester-f2zqh" satisfied condition "Succeeded or Failed"
Sep  5 20:43:10.014: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-f2zqh"
Sep  5 20:43:10.134: INFO: Pod azuredisk-volume-tester-f2zqh has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.079547 seconds, 1.2GB/s
hello world

STEP: Deleting pod azuredisk-volume-tester-f2zqh in namespace azuredisk-4547
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:43:10.474: INFO: deleting PVC "azuredisk-4547"/"pvc-sstpt"
Sep  5 20:43:10.474: INFO: Deleting PersistentVolumeClaim "pvc-sstpt"
STEP: waiting for claim's PV "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" to be deleted
Sep  5 20:43:10.585: INFO: Waiting up to 10m0s for PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde to get deleted
Sep  5 20:43:10.694: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (109.231222ms)
Sep  5 20:43:15.807: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (5.222336045s)
Sep  5 20:43:20.921: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (10.336514408s)
Sep  5 20:43:26.031: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (15.44621143s)
Sep  5 20:43:31.143: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (20.557699561s)
Sep  5 20:43:36.254: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (25.669263193s)
Sep  5 20:43:41.367: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (30.782571154s)
Sep  5 20:43:46.478: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde found and phase=Failed (35.892638633s)
Sep  5 20:43:51.591: INFO: PersistentVolume pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde was removed
Sep  5 20:43:51.591: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  5 20:43:51.699: INFO: Claim "azuredisk-4547" in namespace "pvc-sstpt" doesn't exist in the system
Sep  5 20:43:51.699: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-rdjbd
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 20:44:07.973: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-7dv5s" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Sep  5 20:44:08.088: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 114.515014ms
Sep  5 20:44:10.197: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.224230529s
Sep  5 20:44:12.312: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.339391137s
Sep  5 20:44:14.427: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454083426s
Sep  5 20:44:16.542: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.569274903s
Sep  5 20:44:18.658: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.685133146s
... skipping 4 lines ...
Sep  5 20:44:29.239: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 21.265707633s
Sep  5 20:44:31.354: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 23.381407416s
Sep  5 20:44:33.470: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 25.497067612s
Sep  5 20:44:35.585: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Pending", Reason="", readiness=false. Elapsed: 27.612299657s
Sep  5 20:44:37.701: INFO: Pod "azuredisk-volume-tester-7dv5s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.727517985s
STEP: Saw pod success
Sep  5 20:44:37.701: INFO: Pod "azuredisk-volume-tester-7dv5s" satisfied condition "Succeeded or Failed"
Sep  5 20:44:37.701: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-7dv5s"
Sep  5 20:44:37.815: INFO: Pod azuredisk-volume-tester-7dv5s has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-7dv5s in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:44:38.157: INFO: deleting PVC "azuredisk-7578"/"pvc-z6sq9"
Sep  5 20:44:38.157: INFO: Deleting PersistentVolumeClaim "pvc-z6sq9"
STEP: waiting for claim's PV "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" to be deleted
Sep  5 20:44:38.268: INFO: Waiting up to 10m0s for PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 to get deleted
Sep  5 20:44:38.376: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 found and phase=Failed (108.369932ms)
Sep  5 20:44:43.489: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 found and phase=Failed (5.220866151s)
Sep  5 20:44:48.597: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 found and phase=Failed (10.329467644s)
Sep  5 20:44:53.710: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 found and phase=Failed (15.442665212s)
Sep  5 20:44:58.821: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 found and phase=Failed (20.553097863s)
Sep  5 20:45:03.933: INFO: PersistentVolume pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 was removed
Sep  5 20:45:03.933: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  5 20:45:04.041: INFO: Claim "azuredisk-7578" in namespace "pvc-z6sq9" doesn't exist in the system
Sep  5 20:45:04.041: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-7pxz6
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:45:04.384: INFO: deleting PVC "azuredisk-7578"/"pvc-hgfdj"
Sep  5 20:45:04.384: INFO: Deleting PersistentVolumeClaim "pvc-hgfdj"
STEP: waiting for claim's PV "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" to be deleted
Sep  5 20:45:04.494: INFO: Waiting up to 10m0s for PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 to get deleted
Sep  5 20:45:04.602: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (108.450152ms)
Sep  5 20:45:09.715: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (5.221091334s)
Sep  5 20:45:14.827: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (10.333542727s)
Sep  5 20:45:19.938: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (15.444071073s)
Sep  5 20:45:25.051: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (20.556933682s)
Sep  5 20:45:30.162: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (25.667889428s)
Sep  5 20:45:35.274: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (30.780304824s)
Sep  5 20:45:40.383: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (35.888591471s)
Sep  5 20:45:45.496: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 found and phase=Failed (41.002520279s)
Sep  5 20:45:50.606: INFO: PersistentVolume pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 was removed
Sep  5 20:45:50.606: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  5 20:45:50.717: INFO: Claim "azuredisk-7578" in namespace "pvc-hgfdj" doesn't exist in the system
Sep  5 20:45:50.717: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-f2sqc
STEP: validating provisioned PV
STEP: checking the PV
... skipping 161 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 20:47:14.785: INFO: deleting PVC "azuredisk-8666"/"pvc-jvc92"
Sep  5 20:47:14.785: INFO: Deleting PersistentVolumeClaim "pvc-jvc92"
STEP: waiting for claim's PV "pvc-69a06362-035e-4dd8-a957-f505ed687eac" to be deleted
Sep  5 20:47:14.895: INFO: Waiting up to 10m0s for PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac to get deleted
Sep  5 20:47:15.004: INFO: PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac found and phase=Failed (108.466299ms)
Sep  5 20:47:20.115: INFO: PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac found and phase=Failed (5.219444759s)
Sep  5 20:47:25.223: INFO: PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac found and phase=Failed (10.327933881s)
Sep  5 20:47:30.336: INFO: PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac found and phase=Failed (15.440594401s)
Sep  5 20:47:35.450: INFO: PersistentVolume pvc-69a06362-035e-4dd8-a957-f505ed687eac was removed
Sep  5 20:47:35.450: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8666 to be removed
Sep  5 20:47:35.558: INFO: Claim "azuredisk-8666" in namespace "pvc-jvc92" doesn't exist in the system
Sep  5 20:47:35.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8666" for this suite.

... skipping 321 lines ...
I0905 20:23:48.116225       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662409427\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662409426\" (2022-09-05 19:23:45 +0000 UTC to 2023-09-05 19:23:45 +0000 UTC (now=2022-09-05 20:23:48.116191658 +0000 UTC))"
I0905 20:23:48.117013       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662409428\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662409427\" (2022-09-05 19:23:47 +0000 UTC to 2023-09-05 19:23:47 +0000 UTC (now=2022-09-05 20:23:48.116952383 +0000 UTC))"
I0905 20:23:48.115447       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0905 20:23:48.117231       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0905 20:23:48.117290       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0905 20:23:48.117993       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0905 20:23:50.498786       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0905 20:23:50.498832       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0905 20:23:53.649393       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0905 20:23:53.650029       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-06vmzc-control-plane-tgzc4_6df3c3cc-f6b0-4810-ad8a-d0aebe40de45 became leader"
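[Editor's note: for context, a hedged sketch of the client-go leader-election pattern behind the "attempting to acquire leader lease ... successfully acquired lease" lines above. The identity, timings, and callbacks are illustrative placeholders, not the controller-manager's actual configuration.]

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runWithLeaderElection(ctx context.Context, cs kubernetes.Interface, run func(context.Context)) {
	// Lease lock matching the object named in the log: kube-system/kube-controller-manager.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "kube-controller-manager", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder-id"}, // placeholder identity
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second, // assumed defaults, not read from this job
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,                   // start leader-only work once the lease is held
			OnStoppedLeading: func() { /* stop */ }, // lease lost; stop leader-only work
		},
	})
}
```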
W0905 20:23:53.746265       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0905 20:23:53.747064       1 azure_auth.go:232] Using AzurePublicCloud environment
I0905 20:23:53.747242       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0905 20:23:53.747318       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0905 20:23:53.749116       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0905 20:23:53.749469       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0905 20:23:53.749920       1 reflector.go:219] Starting reflector *v1.Secret (18h38m56.933924678s) from k8s.io/client-go/informers/factory.go:134
I0905 20:23:53.750995       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0905 20:23:53.751316       1 reflector.go:219] Starting reflector *v1.ServiceAccount (18h38m56.933924678s) from k8s.io/client-go/informers/factory.go:134
I0905 20:23:53.751338       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0905 20:23:53.784910       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0905 20:23:53.784944       1 controllermanager.go:562] Starting "replicationcontroller"
I0905 20:23:53.796130       1 controllermanager.go:577] Started "replicationcontroller"
I0905 20:23:53.796258       1 controllermanager.go:562] Starting "deployment"
I0905 20:23:53.796163       1 replica_set.go:186] Starting replicationcontroller controller
I0905 20:23:53.796573       1 shared_informer.go:240] Waiting for caches to sync for ReplicationController
I0905 20:23:53.803222       1 controllermanager.go:577] Started "deployment"
... skipping 75 lines ...
I0905 20:23:53.919689       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0905 20:23:53.919712       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0905 20:23:53.919728       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0905 20:23:53.919749       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0905 20:23:53.919766       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0905 20:23:53.919781       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0905 20:23:53.919819       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0905 20:23:53.919838       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0905 20:23:53.919929       1 controllermanager.go:577] Started "attachdetach"
I0905 20:23:53.919948       1 controllermanager.go:562] Starting "pvc-protection"
I0905 20:23:53.920090       1 attach_detach_controller.go:328] Starting attach detach controller
I0905 20:23:53.920109       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0905 20:23:53.935933       1 controllermanager.go:577] Started "pvc-protection"
... skipping 124 lines ...
I0905 20:23:57.105656       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0905 20:23:57.105678       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0905 20:23:57.105716       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0905 20:23:57.106292       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0905 20:23:57.106380       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0905 20:23:57.106398       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0905 20:23:57.106448       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0905 20:23:57.106470       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0905 20:23:57.106618       1 controllermanager.go:577] Started "persistentvolume-binder"
I0905 20:23:57.106766       1 controllermanager.go:562] Starting "endpointslice"
I0905 20:23:57.106662       1 pv_controller_base.go:308] Starting persistent volume controller
I0905 20:23:57.106925       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0905 20:23:57.257820       1 controllermanager.go:577] Started "endpointslice"
... skipping 312 lines ...
I0905 20:23:58.775359       1 garbagecollector.go:254] synced garbage collector
I0905 20:23:58.802889       1 shared_informer.go:270] caches populated
I0905 20:23:58.802920       1 shared_informer.go:247] Caches are synced for garbage collector 
I0905 20:23:58.802934       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0905 20:24:01.537845       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="100.412µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48350" resp=200
I0905 20:24:04.891857       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
W0905 20:24:04.892348       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-06vmzc-control-plane-tgzc4" does not exist
I0905 20:24:04.892123       1 controller.go:693] Ignoring node capz-06vmzc-control-plane-tgzc4 with Ready condition status False
I0905 20:24:04.892634       1 controller.go:272] Triggering nodeSync
I0905 20:24:04.892751       1 controller.go:291] nodeSync has been triggered
I0905 20:24:04.892879       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0905 20:24:04.892991       1 controller.go:804] Finished updateLoadBalancerHosts
I0905 20:24:04.893131       1 controller.go:731] It took 0.000251229 seconds to finish nodeSyncInternal
... skipping 20 lines ...
I0905 20:24:08.239824       1 certificate_controller.go:82] Adding certificate request csr-f8cvr
I0905 20:24:08.239843       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (2.001µs)
I0905 20:24:08.239852       1 certificate_controller.go:82] Adding certificate request csr-f8cvr
I0905 20:24:08.239867       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (3.2µs)
I0905 20:24:08.239875       1 certificate_controller.go:82] Adding certificate request csr-f8cvr
I0905 20:24:08.253856       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (13.947424ms)
I0905 20:24:08.253905       1 certificate_controller.go:151] Sync csr-f8cvr failed with : recognized csr "csr-f8cvr" as [selfnodeclient nodeclient] but subject access review was not approved
I0905 20:24:08.459979       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (5.634215ms)
I0905 20:24:08.460028       1 certificate_controller.go:151] Sync csr-f8cvr failed with : recognized csr "csr-f8cvr" as [selfnodeclient nodeclient] but subject access review was not approved
I0905 20:24:08.864221       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (3.756411ms)
I0905 20:24:08.864256       1 certificate_controller.go:151] Sync csr-f8cvr failed with : recognized csr "csr-f8cvr" as [selfnodeclient nodeclient] but subject access review was not approved
I0905 20:24:09.670124       1 certificate_controller.go:173] Finished syncing certificate request "csr-f8cvr" (5.334973ms)
I0905 20:24:09.670166       1 certificate_controller.go:151] Sync csr-f8cvr failed with : recognized csr "csr-f8cvr" as [selfnodeclient nodeclient] but subject access review was not approved
I0905 20:24:10.651664       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:24:11.158012       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-control-plane-tgzc4), assuming it is managed by availability set: not a vmss instance
I0905 20:24:11.158266       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-control-plane-tgzc4), assuming it is managed by availability set: not a vmss instance
I0905 20:24:11.158465       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:24:11.158606       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:24:11.172524       1 controller.go:272] Triggering nodeSync
... skipping 34 lines ...
I0905 20:24:11.652754       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-78fcd69978" need=2 creating=2
I0905 20:24:11.653050       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0905 20:24:11.663831       1 endpoints_controller.go:557] Update endpoints for kube-system/kube-dns, ready: 0 not ready: 0
I0905 20:24:11.664610       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0905 20:24:11.671647       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-05 20:24:11.652318155 +0000 UTC m=+26.046060544 - now: 2022-09-05 20:24:11.671636859 +0000 UTC m=+26.065379148]
I0905 20:24:11.678912       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="45.409609ms"
I0905 20:24:11.678945       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0905 20:24:11.678982       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-05 20:24:11.678964119 +0000 UTC m=+26.072706408"
I0905 20:24:11.679728       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-05 20:24:11 +0000 UTC - now: 2022-09-05 20:24:11.679722897 +0000 UTC m=+26.073465286]
I0905 20:24:11.700637       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0905 20:24:11.700792       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="21.813862ms"
I0905 20:24:11.700826       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-05 20:24:11.700806084 +0000 UTC m=+26.094548473"
I0905 20:24:11.701549       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-05 20:24:11 +0000 UTC - now: 2022-09-05 20:24:11.70154116 +0000 UTC m=+26.095283549]
... skipping 235 lines ...
I0905 20:24:23.528944       1 endpoints_controller.go:387] Finished syncing service "kube-system/metrics-server" endpoints. (50.104µs)
I0905 20:24:23.529623       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/metrics-server-8c95fb79b-ffn28"
I0905 20:24:23.539236       1 controller_utils.go:581] Controller metrics-server-8c95fb79b created pod metrics-server-8c95fb79b-ffn28
I0905 20:24:23.539522       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-8c95fb79b, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0905 20:24:23.540235       1 event.go:291] "Event occurred" object="kube-system/metrics-server-8c95fb79b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-8c95fb79b-ffn28"
I0905 20:24:23.543015       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="48.420368ms"
I0905 20:24:23.543275       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0905 20:24:23.543520       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-05 20:24:23.543446521 +0000 UTC m=+37.937188810"
I0905 20:24:23.544505       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-05 20:24:23 +0000 UTC - now: 2022-09-05 20:24:23.544497912 +0000 UTC m=+37.938240201]
I0905 20:24:23.547234       1 endpointslice_controller.go:319] Finished syncing service "kube-system/metrics-server" endpoint slices. (18.035753ms)
I0905 20:24:23.554573       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-8c95fb79b"
I0905 20:24:23.554996       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (43.581751ms)
I0905 20:24:23.555208       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bdb49dde7e3ec0, ext:37905332369, loc:(*time.Location)(0x751a1a0)}}
... skipping 54 lines ...
I0905 20:24:26.392745       1 controller_utils.go:206] Controller kube-system/calico-kube-controllers-969cf87c4 either never recorded expectations, or the ttl expired.
I0905 20:24:26.392817       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bdb49e9769d7cb, ext:40786554880, loc:(*time.Location)(0x751a1a0)}}
I0905 20:24:26.392914       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/calico-kube-controllers-969cf87c4" need=1 creating=1
I0905 20:24:26.396655       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0905 20:24:26.396838       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-05 20:24:26.388999576 +0000 UTC m=+40.782741965 - now: 2022-09-05 20:24:26.396830023 +0000 UTC m=+40.790572412]
I0905 20:24:26.400088       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="17.688661ms"
I0905 20:24:26.400310       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0905 20:24:26.400545       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-05 20:24:26.400521528 +0000 UTC m=+40.794263817"
I0905 20:24:26.401448       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/calico-kube-controllers-969cf87c4-fwvmw"
I0905 20:24:26.401736       1 controller_utils.go:581] Controller calico-kube-controllers-969cf87c4 created pod calico-kube-controllers-969cf87c4-fwvmw
I0905 20:24:26.402023       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0905 20:24:26.401369       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-fwvmw" podUID=d6b3749c-9b99-4e9e-9b67-814481d1bf7d
I0905 20:24:26.401387       1 disruption.go:415] addPod called on pod "calico-kube-controllers-969cf87c4-fwvmw"
... skipping 106 lines ...
I0905 20:24:28.436861       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0905 20:24:28.436831       1 reflector.go:219] Starting reflector *v1.PartialObjectMetadata (16h16m24.181542158s) from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0905 20:24:28.437102       1 reflector.go:255] Listing and watching *v1.PartialObjectMetadata from k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0905 20:24:28.537428       1 shared_informer.go:270] caches populated
I0905 20:24:28.537480       1 shared_informer.go:247] Caches are synced for resource quota 
I0905 20:24:28.537514       1 resource_quota_controller.go:458] synced quota controller
W0905 20:24:28.798787       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0905 20:24:28.799078       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0905 20:24:28.799101       1 garbagecollector.go:219] reset restmapper
E0905 20:24:28.807202       1 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0905 20:24:28.814302       1 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0905 20:24:28.815199       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=clusterinformations", kind "crd.projectcalico.org/v1, Kind=ClusterInformation"
I0905 20:24:28.815390       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=networkpolicies", kind "crd.projectcalico.org/v1, Kind=NetworkPolicy"
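Note: the "failed to discover some groups: map[metrics.k8s.io/v1beta1: ...]" warnings above come from aggregated API discovery; metrics-server is not serving yet, so discovery returns partial results together with an error, and the garbage collector keeps going with what it got. A small sketch of tolerating the same partial discovery from a client, again assuming an illustrative kubeconfig path:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// ServerGroupsAndResources returns whatever discovery succeeded alongside an
	// aggregate error for the groups that did not (e.g. metrics.k8s.io/v1beta1
	// while metrics-server is still coming up).
	_, resources, err := dc.ServerGroupsAndResources()
	if err != nil {
		if discovery.IsGroupDiscoveryFailedError(err) {
			log.Printf("partial discovery, continuing: %v", err)
		} else {
			log.Fatal(err)
		}
	}
	for _, rl := range resources {
		fmt.Println(rl.GroupVersion)
	}
}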
... skipping 206 lines ...
I0905 20:24:47.338911       1 replica_set.go:443] Pod coredns-78fcd69978-jnw6t updated, objectMeta {Name:coredns-78fcd69978-jnw6t GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:19593cf8-9e27-4dd2-846a-7aeb35c0eaab ResourceVersion:655 Generation:0 CreationTimestamp:2022-09-05 20:24:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:e5c115d7-2e46-4464-b4fb-284c57bd71c3 Controller:0xc001cfcac7 BlockOwnerDeletion:0xc001cfcac8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5c115d7-2e46-4464-b4fb-284c57bd71c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-78fcd69978-jnw6t GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:19593cf8-9e27-4dd2-846a-7aeb35c0eaab ResourceVersion:664 Generation:0 CreationTimestamp:2022-09-05 20:24:11 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:e5c115d7-2e46-4464-b4fb-284c57bd71c3 Controller:0xc00104156f BlockOwnerDeletion:0xc0010415a0}] Finalizers:[] ClusterName: 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5c115d7-2e46-4464-b4fb-284c57bd71c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 20:24:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0905 20:24:47.339083       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0bdb49ae6e74d3a, ext:26046435183, loc:(*time.Location)(0x751a1a0)}}
I0905 20:24:47.339187       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (109.603µs)
I0905 20:24:47.339354       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-jnw6t"
I0905 20:24:47.339386       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-jnw6t, PodDisruptionBudget controller will avoid syncing.
I0905 20:24:47.339393       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-jnw6t"
I0905 20:24:48.166955       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-06vmzc-control-plane-tgzc4 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-05 20:24:27 +0000 UTC,LastTransitionTime:2022-09-05 20:24:04 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-05 20:24:47 +0000 UTC,LastTransitionTime:2022-09-05 20:24:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0905 20:24:48.167068       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-control-plane-tgzc4 ReadyCondition updated. Updating timestamp.
I0905 20:24:48.167097       1 node_lifecycle_controller.go:893] Node capz-06vmzc-control-plane-tgzc4 is healthy again, removing all taints
I0905 20:24:48.167115       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0905 20:24:52.664015       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:24:52.692571       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:24:52.869186       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
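Note: the ReadyCondition transition above (KubeletNotReady with "cni plugin not initialized" to KubeletReady) is what the node lifecycle controller keys off when it removes the not-ready taints. A read-only sketch of inspecting that condition with client-go; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Status flips to "True" with reason KubeletReady once the CNI is initialised.
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}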
... skipping 90 lines ...
I0905 20:24:59.449650       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0905 20:24:59.453592       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="37.045711ms"
I0905 20:24:59.453770       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-05 20:24:59.453727255 +0000 UTC m=+73.847469644"
I0905 20:24:59.455407       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-05 20:24:59 +0000 UTC - now: 2022-09-05 20:24:59.455398965 +0000 UTC m=+73.849141354]
I0905 20:24:59.455559       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0905 20:24:59.455632       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.890711ms"
W0905 20:24:59.477411       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0905 20:25:00.344346       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (242.403µs)
I0905 20:25:01.311997       1 replica_set.go:443] Pod metrics-server-8c95fb79b-ffn28 updated, objectMeta {Name:metrics-server-8c95fb79b-ffn28 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:a3c58c86-f1fc-4b42-88e8-839b0284920d ResourceVersion:663 Generation:0 CreationTimestamp:2022-09-05 20:24:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:fe7cae96-53de-4414-9431-a63816c9d9b9 Controller:0xc001cb9e90 BlockOwnerDeletion:0xc001cb9e91}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe7cae96-53de-4414-9431-a63816c9d9b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 20:24:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-8c95fb79b-ffn28 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:a3c58c86-f1fc-4b42-88e8-839b0284920d ResourceVersion:729 Generation:0 CreationTimestamp:2022-09-05 20:24:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> 
Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:14081e42551f85ba784cac2c01555ffcb4085c19382d50af91274775465d8fd3 cni.projectcalico.org/podIP:192.168.100.195/32 cni.projectcalico.org/podIPs:192.168.100.195/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:fe7cae96-53de-4414-9431-a63816c9d9b9 Controller:0xc0026946f7 BlockOwnerDeletion:0xc0026946f8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe7cae96-53de-4414-9431-a63816c9d9b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 20:24:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-05 20:25:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0905 20:25:01.312199       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bdb49dde7e3ec0, ext:37905332369, loc:(*time.Location)(0x751a1a0)}}
I0905 20:25:01.312295       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/metrics-server-8c95fb79b" (103.101µs)
I0905 20:25:01.312464       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-ffn28"
I0905 20:25:01.312498       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-ffn28, PodDisruptionBudget controller will avoid syncing.
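Note: "Queueing up deployment \"coredns\" for a progress check after 599s" reflects spec.progressDeadlineSeconds (which defaults to 600): after observing progress the controller requeues a check just before the remaining deadline expires. A sketch of reading that field and the Progressing condition for the same Deployment; the kubeconfig path is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	d, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if d.Spec.ProgressDeadlineSeconds != nil {
		// Defaults to 600; the controller schedules a progress check shortly before it expires.
		fmt.Println("progressDeadlineSeconds:", *d.Spec.ProgressDeadlineSeconds)
	}
	for _, c := range d.Status.Conditions {
		if c.Type == appsv1.DeploymentProgressing {
			fmt.Printf("Progressing=%s reason=%s\n", c.Status, c.Reason)
		}
	}
}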
... skipping 84 lines ...
I0905 20:25:23.080977       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.804µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48838" resp=200
I0905 20:25:28.129032       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:25:28.138564       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:25:28.210494       1 pv_controller_base.go:528] resyncing PV controller
E0905 20:25:28.570422       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0905 20:25:28.570600       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0905 20:25:29.523201       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0905 20:25:33.081650       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="95.606µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35060" resp=200
I0905 20:25:37.990913       1 disruption.go:427] updatePod called on pod "metrics-server-8c95fb79b-ffn28"
I0905 20:25:37.990971       1 disruption.go:490] No PodDisruptionBudgets found for pod metrics-server-8c95fb79b-ffn28, PodDisruptionBudget controller will avoid syncing.
I0905 20:25:37.990979       1 disruption.go:430] No matching pdb for pod "metrics-server-8c95fb79b-ffn28"
I0905 20:25:37.990863       1 replica_set.go:443] Pod metrics-server-8c95fb79b-ffn28 updated, objectMeta {Name:metrics-server-8c95fb79b-ffn28 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:a3c58c86-f1fc-4b42-88e8-839b0284920d ResourceVersion:772 Generation:0 CreationTimestamp:2022-09-05 20:24:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:14081e42551f85ba784cac2c01555ffcb4085c19382d50af91274775465d8fd3 cni.projectcalico.org/podIP:192.168.100.195/32 cni.projectcalico.org/podIPs:192.168.100.195/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:fe7cae96-53de-4414-9431-a63816c9d9b9 Controller:0xc0028f6730 BlockOwnerDeletion:0xc0028f6731}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe7cae96-53de-4414-9431-a63816c9d9b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-05 20:25:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 20:25:12 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.100.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-8c95fb79b-ffn28 GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:a3c58c86-f1fc-4b42-88e8-839b0284920d ResourceVersion:807 Generation:0 CreationTimestamp:2022-09-05 20:24:23 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:14081e42551f85ba784cac2c01555ffcb4085c19382d50af91274775465d8fd3 cni.projectcalico.org/podIP:192.168.100.195/32 cni.projectcalico.org/podIPs:192.168.100.195/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:fe7cae96-53de-4414-9431-a63816c9d9b9 Controller:0xc000c35740 BlockOwnerDeletion:0xc000c35741}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fe7cae96-53de-4414-9431-a63816c9d9b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-05 20:24:23 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-05 20:25:01 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 20:25:37 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.100.195\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0905 20:25:37.991055       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0bdb49dde7e3ec0, ext:37905332369, loc:(*time.Location)(0x751a1a0)}}
... skipping 91 lines ...
I0905 20:26:05.905153       1 certificate_controller.go:173] Finished syncing certificate request "csr-5rpwb" (2.089µs)
I0905 20:26:05.905305       1 certificate_controller.go:87] Updating certificate request csr-5rpwb
I0905 20:26:05.905328       1 certificate_controller.go:173] Finished syncing certificate request "csr-5rpwb" (1.69µs)
I0905 20:26:05.905577       1 certificate_controller.go:173] Finished syncing certificate request "csr-5rpwb" (12.763776ms)
I0905 20:26:05.905618       1 certificate_controller.go:173] Finished syncing certificate request "csr-5rpwb" (1.889µs)
I0905 20:26:08.669171       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-bv5pc"
W0905 20:26:08.671692       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-06vmzc-md-0-bv5pc" does not exist
I0905 20:26:08.669517       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-06vmzc-md-0-bv5pc}
I0905 20:26:08.669552       1 controller.go:693] Ignoring node capz-06vmzc-md-0-bv5pc with Ready condition status False
I0905 20:26:08.670163       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bdb49db5619d61, ext:37289332018, loc:(*time.Location)(0x751a1a0)}}
I0905 20:26:08.671581       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bdb4a6473cae45, ext:71515159674, loc:(*time.Location)(0x751a1a0)}}
I0905 20:26:08.672658       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bdb4b82817d92c, ext:143066393953, loc:(*time.Location)(0x751a1a0)}}
I0905 20:26:08.672747       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-06vmzc-md-0-bv5pc], creating 1
... skipping 85 lines ...
I0905 20:26:10.529617       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bdb4b89f914238, ext:144923355657, loc:(*time.Location)(0x751a1a0)}}
I0905 20:26:10.529678       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0905 20:26:10.529728       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0905 20:26:10.529791       1 daemon_controller.go:1102] Updating daemon set status
I0905 20:26:10.529878       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (2.642003ms)
I0905 20:26:10.988839       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
W0905 20:26:10.988876       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-06vmzc-md-0-dbrp2" does not exist
I0905 20:26:10.988899       1 controller.go:693] Ignoring node capz-06vmzc-md-0-bv5pc with Ready condition status False
I0905 20:26:10.988908       1 controller.go:693] Ignoring node capz-06vmzc-md-0-dbrp2 with Ready condition status False
I0905 20:26:10.988919       1 controller.go:272] Triggering nodeSync
I0905 20:26:10.988955       1 controller.go:291] nodeSync has been triggered
I0905 20:26:10.988964       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0905 20:26:10.992580       1 controller.go:804] Finished updateLoadBalancerHosts
... skipping 450 lines ...
I0905 20:26:41.299843       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-w8wvr, PodDisruptionBudget controller will avoid syncing.
I0905 20:26:41.299851       1 disruption.go:430] No matching pdb for pod "calico-node-w8wvr"
I0905 20:26:41.304994       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:26:41.516794       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:26:43.081727       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.504µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34648" resp=200
I0905 20:26:43.132437       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:26:43.194163       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-06vmzc-md-0-bv5pc transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-05 20:26:18 +0000 UTC,LastTransitionTime:2022-09-05 20:26:08 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-05 20:26:38 +0000 UTC,LastTransitionTime:2022-09-05 20:26:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0905 20:26:43.194309       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-bv5pc ReadyCondition updated. Updating timestamp.
I0905 20:26:43.218078       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-bv5pc"
I0905 20:26:43.218556       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-06vmzc-md-0-bv5pc}
I0905 20:26:43.218582       1 taint_manager.go:440] "Updating known taints on node" node="capz-06vmzc-md-0-bv5pc" taints=[]
I0905 20:26:43.218600       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-06vmzc-md-0-bv5pc"
I0905 20:26:43.222834       1 node_lifecycle_controller.go:893] Node capz-06vmzc-md-0-bv5pc is healthy again, removing all taints
I0905 20:26:43.222918       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-06vmzc-md-0-dbrp2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-05 20:26:21 +0000 UTC,LastTransitionTime:2022-09-05 20:26:10 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-05 20:26:41 +0000 UTC,LastTransitionTime:2022-09-05 20:26:41 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0905 20:26:43.223201       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:26:43.223138       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:26:43.232097       1 node_lifecycle_controller.go:893] Node capz-06vmzc-md-0-dbrp2 is healthy again, removing all taints
I0905 20:26:43.234144       1 node_lifecycle_controller.go:1214] Controller detected that zone westeurope::0 is now in state Normal.
I0905 20:26:43.233547       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:26:43.233985       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-06vmzc-md-0-dbrp2}
... skipping 125 lines ...
I0905 20:28:26.841461       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (170.530497ms)
I0905 20:28:26.844051       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (2.4µs)
I0905 20:28:26.970638       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (11.391822ms)
I0905 20:28:26.972991       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (14.025187ms)
I0905 20:28:27.708805       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0905 20:28:27.727171       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-bm5w9, uid 480323b3-e35d-4667-9d2e-93612e8579af, event type delete
E0905 20:28:27.740008       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-j97m4" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0905 20:28:27.744015       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0905 20:28:27.744058       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid cf7f6f09-fda0-4092-8861-2461b1145aed, event type delete
I0905 20:28:27.744089       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (1.7µs)
I0905 20:28:27.855992       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid 09afa681-e0ae-4d0e-ac0b-9564b0352b48, event type delete
I0905 20:28:27.858603       1 publisher.go:186] Finished syncing namespace "azuredisk-2540" (2.558662ms)
I0905 20:28:27.877472       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.701µs)
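Note: the tokens_controller error above ("unable to create new content in namespace azuredisk-2540 because it is being terminated") is the API server refusing writes into a terminating namespace; the controller treats the Forbidden error as expected and stops recreating the token. A sketch of the same check from a client, assuming an illustrative kubeconfig path and a hypothetical Secret:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns, err := cs.CoreV1().Namespaces().Get(context.TODO(), "azuredisk-2540", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if ns.Status.Phase == corev1.NamespaceTerminating {
		fmt.Println("namespace is terminating; skip creating new objects in it")
		return
	}

	// If the namespace flips to Terminating between the check and the create,
	// the API server rejects the write with a Forbidden error like the one in the log.
	_, err = cs.CoreV1().Secrets(ns.Name).Create(context.TODO(),
		&corev1.Secret{ObjectMeta: metav1.ObjectMeta{GenerateName: "example-"}}, // hypothetical secret
		metav1.CreateOptions{})
	if apierrors.IsForbidden(err) {
		fmt.Println("create rejected, namespace likely terminating:", err)
	} else if err != nil {
		log.Fatal(err)
	}
}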
... skipping 56 lines ...
I0905 20:28:29.883948       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (2.593364ms)
I0905 20:28:29.932097       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (2.9µs)
I0905 20:28:29.934091       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5466, estimate: 0, errors: <nil>
I0905 20:28:29.943512       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5466" (169.473831ms)
I0905 20:28:30.814702       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0905 20:28:30.973178       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-7jcvq, uid c310a769-cae3-4b42-bc58-4a1af7fb9705, event type delete
E0905 20:28:31.000231       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-xlqhq" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0905 20:28:31.013205       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid 847a0e22-4f52-4e47-8d44-a34e9843fb3f, event type delete
I0905 20:28:31.013375       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0905 20:28:31.013503       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2.8µs)
I0905 20:28:31.018693       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2790, name kube-root-ca.crt, uid 87d6b63d-2f5b-4977-b2a2-135fc3cb386c, event type delete
I0905 20:28:31.021562       1 publisher.go:186] Finished syncing namespace "azuredisk-2790" (2.821678ms)
I0905 20:28:31.092507       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2.6µs)
... skipping 81 lines ...
I0905 20:28:32.843521       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" lun 0 to node "capz-06vmzc-md-0-bv5pc".
I0905 20:28:32.843612       1 azure_controller_standard.go:93] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - attach disk(capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) with DiskEncryptionSetID()
I0905 20:28:32.878096       1 namespace_controller.go:185] Namespace has been deleted azuredisk-2540
I0905 20:28:32.878340       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (288.718µs)
I0905 20:28:32.885163       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5194
I0905 20:28:32.912462       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-4pmfn, uid a653e581-e3d1-42fa-b27b-890d80fae76e, event type delete
E0905 20:28:32.930591       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-trtwv" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0905 20:28:33.024485       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0905 20:28:33.024576       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid c17685b7-4c9f-4faf-8b29-f4e948dfd637, event type delete
I0905 20:28:33.024605       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2.601µs)
I0905 20:28:33.030094       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid 9231d09c-899d-4ce6-b88d-8d44185c55a1, event type delete
I0905 20:28:33.033232       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (2.99979ms)
I0905 20:28:33.042647       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (1.8µs)
... skipping 126 lines ...
I0905 20:28:53.451740       1 pv_controller.go:1108] reclaimVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: policy is Delete
I0905 20:28:53.451753       1 pv_controller.go:1752] scheduleOperation[delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]]
I0905 20:28:53.451762       1 pv_controller.go:1763] operation "delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]" is already running, skipping
I0905 20:28:53.451794       1 pv_controller.go:1231] deleteVolumeOperation [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2] started
I0905 20:28:53.453705       1 pv_controller.go:1340] isVolumeReleased[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is released
I0905 20:28:53.453873       1 pv_controller.go:1404] doDeleteVolume [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]
I0905 20:28:53.479963       1 pv_controller.go:1259] deletion of volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:28:53.479985       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: set phase Failed
I0905 20:28:53.479993       1 pv_controller.go:858] updating PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: set phase Failed
I0905 20:28:53.483746       1 pv_protection_controller.go:205] Got event on PV pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2
I0905 20:28:53.483748       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" with version 1349
I0905 20:28:53.483916       1 pv_controller.go:879] volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" entered phase "Failed"
I0905 20:28:53.483968       1 pv_controller.go:901] volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
E0905 20:28:53.484149       1 goroutinemap.go:150] Operation for "delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]" failed. No retries permitted until 2022-09-05 20:28:53.984127241 +0000 UTC m=+308.377869630 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:28:53.483772       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" with version 1349
I0905 20:28:53.484459       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: phase: Failed, bound to: "azuredisk-1353/pvc-x4srj (uid: edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2)", boundByController: true
I0905 20:28:53.484633       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is bound to claim azuredisk-1353/pvc-x4srj
I0905 20:28:53.484783       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: claim azuredisk-1353/pvc-x4srj not found
I0905 20:28:53.484797       1 pv_controller.go:1108] reclaimVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: policy is Delete
I0905 20:28:53.484811       1 pv_controller.go:1752] scheduleOperation[delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]]
I0905 20:28:53.484820       1 pv_controller.go:1765] operation "delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]" postponed due to exponential backoff
I0905 20:28:53.484501       1 event.go:291] "Event occurred" object="pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted"
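Note: the goroutinemap error above shows the retry shape for the failed PV deletion: the backing disk cannot be deleted while it is still attached, so the operation is retried with exponential backoff (500ms, then 1s, and so on) until the detach completes. A self-contained sketch of the same backoff pattern using the apimachinery wait package, with a hypothetical deleteDisk helper standing in for the real cloud call:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteDisk stands in for the real cloud call; here it fails a few times to
// mimic "disk ... already attached to node ..., could not be deleted".
var attempts int

func deleteDisk() error {
	attempts++
	if attempts < 4 {
		return errors.New("disk still attached, could not be deleted")
	}
	return nil
}

func main() {
	backoff := wait.Backoff{
		Duration: 500 * time.Millisecond, // first retry delay, as in the log
		Factor:   2,                      // 500ms -> 1s -> 2s ...
		Steps:    6,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteDisk(); err != nil {
			fmt.Println("retrying after error:", err)
			return false, nil // keep retrying
		}
		return true, nil // done
	})
	if err != nil {
		fmt.Println("gave up:", err)
	} else {
		fmt.Println("disk deleted after", attempts, "attempts")
	}
}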
... skipping 6 lines ...
I0905 20:28:58.210177       1 controller.go:291] nodeSync has been triggered
I0905 20:28:58.210204       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0905 20:28:58.210269       1 controller.go:804] Finished updateLoadBalancerHosts
I0905 20:28:58.210293       1 controller.go:731] It took 8.6405e-05 seconds to finish nodeSyncInternal
I0905 20:28:58.229504       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:28:58.229555       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" with version 1349
I0905 20:28:58.229589       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: phase: Failed, bound to: "azuredisk-1353/pvc-x4srj (uid: edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2)", boundByController: true
I0905 20:28:58.229625       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is bound to claim azuredisk-1353/pvc-x4srj
I0905 20:28:58.229645       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: claim azuredisk-1353/pvc-x4srj not found
I0905 20:28:58.229655       1 pv_controller.go:1108] reclaimVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: policy is Delete
I0905 20:28:58.229671       1 pv_controller.go:1752] scheduleOperation[delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]]
I0905 20:28:58.229698       1 pv_controller.go:1231] deleteVolumeOperation [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2] started
I0905 20:28:58.231868       1 gc_controller.go:161] GC'ing orphaned
I0905 20:28:58.231904       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:28:58.232541       1 pv_controller.go:1340] isVolumeReleased[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is released
I0905 20:28:58.232570       1 pv_controller.go:1404] doDeleteVolume [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]
I0905 20:28:58.298397       1 pv_controller.go:1259] deletion of volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:28:58.298424       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: set phase Failed
I0905 20:28:58.298436       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: phase Failed already set
E0905 20:28:58.298474       1 goroutinemap.go:150] Operation for "delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]" failed. No retries permitted until 2022-09-05 20:28:59.298445794 +0000 UTC m=+313.692188083 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:28:58.344090       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0905 20:28:58.723767       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0905 20:28:58.920150       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-bv5pc"
I0905 20:28:58.920542       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 to the node "capz-06vmzc-md-0-bv5pc" mounted false
I0905 20:28:59.025447       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-06vmzc-md-0-bv5pc" succeeded. VolumesAttached: []
I0905 20:28:59.025563       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-bv5pc"
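Note: the node_status_updater and attach_detach_controller lines above track the node's status.volumesAttached and status.volumesInUse lists; once volumesAttached is cleared, the controller can detach the Azure disk and the PV deletion retry can finally succeed. A read-only sketch of inspecting those fields for the node in the log, assuming an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-06vmzc-md-0-bv5pc", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// volumesAttached is what the attach/detach controller updates in node status;
	// volumesInUse is reported by the kubelet and gates safe detach.
	for _, v := range node.Status.VolumesAttached {
		fmt.Println("attached:", v.Name)
	}
	for _, v := range node.Status.VolumesInUse {
		fmt.Println("in use:", v)
	}
}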
... skipping 14 lines ...
I0905 20:29:10.835389       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2 was detached from node:capz-06vmzc-md-0-bv5pc
I0905 20:29:10.835530       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2") on node "capz-06vmzc-md-0-bv5pc" 
I0905 20:29:13.083587       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="63.804µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42190" resp=200
I0905 20:29:13.138645       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:29:13.230442       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:29:13.230502       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" with version 1349
I0905 20:29:13.230544       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: phase: Failed, bound to: "azuredisk-1353/pvc-x4srj (uid: edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2)", boundByController: true
I0905 20:29:13.230585       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is bound to claim azuredisk-1353/pvc-x4srj
I0905 20:29:13.230608       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: claim azuredisk-1353/pvc-x4srj not found
I0905 20:29:13.230621       1 pv_controller.go:1108] reclaimVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: policy is Delete
I0905 20:29:13.230639       1 pv_controller.go:1752] scheduleOperation[delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]]
I0905 20:29:13.230675       1 pv_controller.go:1231] deleteVolumeOperation [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2] started
I0905 20:29:13.234668       1 pv_controller.go:1340] isVolumeReleased[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is released
... skipping 5 lines ...
I0905 20:29:18.450729       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2
I0905 20:29:18.450770       1 pv_controller.go:1435] volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" deleted
I0905 20:29:18.450784       1 pv_controller.go:1283] deleteVolumeOperation [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: success
I0905 20:29:18.457033       1 pv_protection_controller.go:205] Got event on PV pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2
I0905 20:29:18.457094       1 pv_protection_controller.go:125] Processing PV pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2
I0905 20:29:18.457404       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" with version 1388
I0905 20:29:18.457439       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: phase: Failed, bound to: "azuredisk-1353/pvc-x4srj (uid: edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2)", boundByController: true
I0905 20:29:18.457465       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: volume is bound to claim azuredisk-1353/pvc-x4srj
I0905 20:29:18.457486       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: claim azuredisk-1353/pvc-x4srj not found
I0905 20:29:18.457495       1 pv_controller.go:1108] reclaimVolume[pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2]: policy is Delete
I0905 20:29:18.457511       1 pv_controller.go:1752] scheduleOperation[delete-pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2[075b195c-6a2a-45c3-baa0-5cc51e435096]]
I0905 20:29:18.457535       1 pv_controller.go:1231] deleteVolumeOperation [pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2] started
I0905 20:29:18.460440       1 pv_controller.go:1243] Volume "pvc-edc30e1a-5b8e-4f4c-b54b-fa6d1d44e4f2" is already being deleted
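
For this first volume the cycle resolves in the order visible above: the delete keeps failing while the disk is attached, the attach/detach controller detaches the disk from capz-06vmzc-md-0-bv5pc, and the next retry then deletes the managed disk so the PV object can be removed. The sketch below shows that detach-then-delete ordering in isolation; detachDisk, deleteManagedDisk and the stillAttached poll are stand-ins for the cloud-provider calls, not the azure-disk driver's real API.

package main

import (
	"context"
	"fmt"
	"time"
)

// detachDisk and deleteManagedDisk stand in for cloud-provider calls; they
// are assumptions for illustration only.
func detachDisk(disk, node string) error {
	fmt.Println("detach requested:", disk, "from", node)
	return nil
}

func deleteManagedDisk(disk string) error {
	fmt.Println("deleted managed disk:", disk)
	return nil
}

// reclaimDisk mirrors the ordering in the log: request a detach, poll until
// the disk no longer reports as attached, then delete it.
func reclaimDisk(ctx context.Context, disk, node string, stillAttached func() bool) error {
	if err := detachDisk(disk, node); err != nil {
		return err
	}
	tick := time.NewTicker(500 * time.Millisecond) // the controller re-checks on its resync period
	defer tick.Stop()
	for stillAttached() {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
	return deleteManagedDisk(disk)
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	polls := 3 // pretend the detach completes after a few polls
	stillAttached := func() bool {
		polls--
		return polls > 0
	}
	if err := reclaimDisk(ctx, "pvc-example-disk", "example-node", stillAttached); err != nil {
		fmt.Println("reclaim failed:", err)
	}
}
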
... skipping 545 lines ...
I0905 20:31:31.454127       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: claim azuredisk-1563/pvc-zdzrv not found
I0905 20:31:31.454139       1 pv_controller.go:1108] reclaimVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: policy is Delete
I0905 20:31:31.454154       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]]
I0905 20:31:31.454167       1 pv_controller.go:1763] operation "delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]" is already running, skipping
I0905 20:31:31.455752       1 pv_controller.go:1340] isVolumeReleased[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is released
I0905 20:31:31.455826       1 pv_controller.go:1404] doDeleteVolume [pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]
I0905 20:31:31.507899       1 pv_controller.go:1259] deletion of volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:31:31.507923       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: set phase Failed
I0905 20:31:31.507933       1 pv_controller.go:858] updating PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: set phase Failed
I0905 20:31:31.511389       1 pv_protection_controller.go:205] Got event on PV pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca
I0905 20:31:31.511581       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" with version 1649
I0905 20:31:31.511963       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: phase: Failed, bound to: "azuredisk-1563/pvc-zdzrv (uid: bb617e6a-6cff-4c4d-a78e-653ae24e39ca)", boundByController: true
I0905 20:31:31.512388       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is bound to claim azuredisk-1563/pvc-zdzrv
I0905 20:31:31.512587       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: claim azuredisk-1563/pvc-zdzrv not found
I0905 20:31:31.512763       1 pv_controller.go:1108] reclaimVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: policy is Delete
I0905 20:31:31.512914       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]]
I0905 20:31:31.513065       1 pv_controller.go:1763] operation "delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]" is already running, skipping
I0905 20:31:31.512556       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" with version 1649
I0905 20:31:31.513355       1 pv_controller.go:879] volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" entered phase "Failed"
I0905 20:31:31.513472       1 pv_controller.go:901] volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:31:31.514049       1 event.go:291] "Event occurred" object="pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted"
E0905 20:31:31.513644       1 goroutinemap.go:150] Operation for "delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]" failed. No retries permitted until 2022-09-05 20:31:32.013606012 +0000 UTC m=+466.407348401 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:31:32.142139       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 17 items received
I0905 20:31:33.081336       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="114.706µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54552" resp=200
I0905 20:31:33.209065       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 13 items received
I0905 20:31:38.126642       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0905 20:31:38.242269       1 gc_controller.go:161] GC'ing orphaned
I0905 20:31:38.242312       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 10 lines ...
I0905 20:31:39.158971       1 azure_controller_standard.go:166] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca)
I0905 20:31:43.082240       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.204µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58894" resp=200
I0905 20:31:43.145029       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:31:43.148319       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0905 20:31:43.238729       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:31:43.238839       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" with version 1649
I0905 20:31:43.238886       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: phase: Failed, bound to: "azuredisk-1563/pvc-zdzrv (uid: bb617e6a-6cff-4c4d-a78e-653ae24e39ca)", boundByController: true
I0905 20:31:43.238924       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is bound to claim azuredisk-1563/pvc-zdzrv
I0905 20:31:43.238947       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: claim azuredisk-1563/pvc-zdzrv not found
I0905 20:31:43.238957       1 pv_controller.go:1108] reclaimVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: policy is Delete
I0905 20:31:43.238975       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]]
I0905 20:31:43.239010       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca] started
I0905 20:31:43.242011       1 pv_controller.go:1340] isVolumeReleased[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is released
I0905 20:31:43.242034       1 pv_controller.go:1404] doDeleteVolume [pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]
I0905 20:31:43.242097       1 pv_controller.go:1259] deletion of volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) since it's in attaching or detaching state
I0905 20:31:43.242114       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: set phase Failed
I0905 20:31:43.242126       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: phase Failed already set
E0905 20:31:43.242179       1 goroutinemap.go:150] Operation for "delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]" failed. No retries permitted until 2022-09-05 20:31:44.242135219 +0000 UTC m=+478.635877508 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) since it's in attaching or detaching state
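
Note that the same volume hits two different transient failures in a row: first "already attached to node(...), could not be deleted" (the disk is still attached to the VM) and then "failed to delete disk(...) since it's in attaching or detaching state" (the detach is already in flight). Both simply push the delete into another backoff window. The sketch below classifies the two messages; matching on error text is an illustrative shortcut, not how the controller or cloud provider actually distinguishes them.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// retryableDiskDeleteError reports whether a delete failure is one of the two
// transient conditions seen in this log. String matching is only for
// illustration; production code should rely on typed errors instead.
func retryableDiskDeleteError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "already attached to node") ||
		strings.Contains(msg, "attaching or detaching state")
}

func main() {
	cases := []error{
		errors.New("disk(...) already attached to node(...), could not be deleted"),
		errors.New("failed to delete disk(...) since it's in attaching or detaching state"),
		errors.New("disk not found"),
	}
	for _, err := range cases {
		fmt.Printf("retryable=%v  %v\n", retryableDiskDeleteError(err), err)
	}
}
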
I0905 20:31:43.292768       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-bv5pc ReadyCondition updated. Updating timestamp.
I0905 20:31:50.061692       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) returned with <nil>
I0905 20:31:50.061732       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca) succeeded
I0905 20:31:50.061743       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca was detached from node:capz-06vmzc-md-0-bv5pc
I0905 20:31:50.061778       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca") on node "capz-06vmzc-md-0-bv5pc" 
I0905 20:31:52.461044       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0905 20:31:53.081358       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="113.207µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59544" resp=200
I0905 20:31:57.177056       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0905 20:31:57.760957       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0905 20:31:58.144544       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:31:58.145624       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:31:58.239565       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:31:58.239629       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" with version 1649
I0905 20:31:58.239669       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: phase: Failed, bound to: "azuredisk-1563/pvc-zdzrv (uid: bb617e6a-6cff-4c4d-a78e-653ae24e39ca)", boundByController: true
I0905 20:31:58.239708       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is bound to claim azuredisk-1563/pvc-zdzrv
I0905 20:31:58.239726       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: claim azuredisk-1563/pvc-zdzrv not found
I0905 20:31:58.239736       1 pv_controller.go:1108] reclaimVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: policy is Delete
I0905 20:31:58.239753       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]]
I0905 20:31:58.239781       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca] started
I0905 20:31:58.242746       1 gc_controller.go:161] GC'ing orphaned
... skipping 6 lines ...
I0905 20:32:03.478181       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca
I0905 20:32:03.478216       1 pv_controller.go:1435] volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" deleted
I0905 20:32:03.478231       1 pv_controller.go:1283] deleteVolumeOperation [pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: success
I0905 20:32:03.491378       1 pv_protection_controller.go:205] Got event on PV pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca
I0905 20:32:03.491586       1 pv_protection_controller.go:125] Processing PV pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca
I0905 20:32:03.491498       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" with version 1698
I0905 20:32:03.492258       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: phase: Failed, bound to: "azuredisk-1563/pvc-zdzrv (uid: bb617e6a-6cff-4c4d-a78e-653ae24e39ca)", boundByController: true
I0905 20:32:03.492402       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: volume is bound to claim azuredisk-1563/pvc-zdzrv
I0905 20:32:03.492537       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: claim azuredisk-1563/pvc-zdzrv not found
I0905 20:32:03.492669       1 pv_controller.go:1108] reclaimVolume[pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca]: policy is Delete
I0905 20:32:03.492790       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]]
I0905 20:32:03.492891       1 pv_controller.go:1763] operation "delete-pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca[a4a3fe81-9850-49b0-b2f2-bdc68329242f]" is already running, skipping
I0905 20:32:03.497118       1 pv_controller_base.go:235] volume "pvc-bb617e6a-6cff-4c4d-a78e-653ae24e39ca" deleted
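
While a delete operation is in flight, resyncs and watch events that re-queue the same volume are dropped with "operation ... is already running, skipping" (or "postponed due to exponential backoff" while the retry window is still open), so at most one delete runs per volume at a time. The single-flight guard below sketches that behaviour; it is an illustration, not the controller's goroutinemap implementation.

package main

import (
	"fmt"
	"sync"
	"time"
)

// singleFlight runs fn for a given key unless a run with that key is already
// in progress; duplicate requests are skipped, like the "already running,
// skipping" lines above.
type singleFlight struct {
	mu      sync.Mutex
	running map[string]bool
}

func (s *singleFlight) run(key string, fn func()) bool {
	s.mu.Lock()
	if s.running[key] {
		s.mu.Unlock()
		return false // already running, skip
	}
	s.running[key] = true
	s.mu.Unlock()
	go func() {
		defer func() {
			s.mu.Lock()
			delete(s.running, key)
			s.mu.Unlock()
		}()
		fn()
	}()
	return true
}

func main() {
	s := &singleFlight{running: map[string]bool{}}
	slowDelete := func() { time.Sleep(200 * time.Millisecond) }
	for i := 0; i < 3; i++ { // several resyncs arrive while the first delete is still running
		fmt.Printf("resync %d: started=%v\n", i, s.run("delete-pvc-example", slowDelete))
		time.Sleep(50 * time.Millisecond)
	}
	time.Sleep(300 * time.Millisecond) // let the background delete finish before exiting
}
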
... skipping 281 lines ...
I0905 20:32:30.078995       1 pv_controller.go:1108] reclaimVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: policy is Delete
I0905 20:32:30.079027       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]]
I0905 20:32:30.079042       1 pv_controller.go:1763] operation "delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]" is already running, skipping
I0905 20:32:30.079097       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4818e932-c294-4904-bc10-3f517dd0280d] started
I0905 20:32:30.081610       1 pv_controller.go:1340] isVolumeReleased[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is released
I0905 20:32:30.081634       1 pv_controller.go:1404] doDeleteVolume [pvc-4818e932-c294-4904-bc10-3f517dd0280d]
I0905 20:32:30.126638       1 pv_controller.go:1259] deletion of volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:32:30.126667       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: set phase Failed
I0905 20:32:30.126678       1 pv_controller.go:858] updating PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: set phase Failed
I0905 20:32:30.130676       1 pv_protection_controller.go:205] Got event on PV pvc-4818e932-c294-4904-bc10-3f517dd0280d
I0905 20:32:30.130789       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" with version 1800
I0905 20:32:30.130869       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: phase: Failed, bound to: "azuredisk-7463/pvc-k7wmw (uid: 4818e932-c294-4904-bc10-3f517dd0280d)", boundByController: true
I0905 20:32:30.130938       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is bound to claim azuredisk-7463/pvc-k7wmw
I0905 20:32:30.131006       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: claim azuredisk-7463/pvc-k7wmw not found
I0905 20:32:30.131041       1 pv_controller.go:1108] reclaimVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: policy is Delete
I0905 20:32:30.131101       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]]
I0905 20:32:30.131135       1 pv_controller.go:1763] operation "delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]" is already running, skipping
I0905 20:32:30.132433       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" with version 1800
I0905 20:32:30.132464       1 pv_controller.go:879] volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" entered phase "Failed"
I0905 20:32:30.132477       1 pv_controller.go:901] volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:32:30.132647       1 goroutinemap.go:150] Operation for "delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]" failed. No retries permitted until 2022-09-05 20:32:30.632497599 +0000 UTC m=+525.026239988 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:32:30.133068       1 event.go:291] "Event occurred" object="pvc-4818e932-c294-4904-bc10-3f517dd0280d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:32:31.380695       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:32:31.380730       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:32:31.465888       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:32:31.466882       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:32:31.466506       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-06vmzc-md-0-dbrp2" succeeded. VolumesAttached: []
... skipping 11 lines ...
I0905 20:32:41.401091       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:32:41.401562       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:32:43.089023       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="76.704µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36280" resp=200
I0905 20:32:43.148459       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:32:43.242254       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:32:43.242354       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" with version 1800
I0905 20:32:43.242430       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: phase: Failed, bound to: "azuredisk-7463/pvc-k7wmw (uid: 4818e932-c294-4904-bc10-3f517dd0280d)", boundByController: true
I0905 20:32:43.242481       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is bound to claim azuredisk-7463/pvc-k7wmw
I0905 20:32:43.242514       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: claim azuredisk-7463/pvc-k7wmw not found
I0905 20:32:43.242530       1 pv_controller.go:1108] reclaimVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: policy is Delete
I0905 20:32:43.242551       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]]
I0905 20:32:43.242611       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4818e932-c294-4904-bc10-3f517dd0280d] started
I0905 20:32:43.249056       1 pv_controller.go:1340] isVolumeReleased[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is released
I0905 20:32:43.249079       1 pv_controller.go:1404] doDeleteVolume [pvc-4818e932-c294-4904-bc10-3f517dd0280d]
I0905 20:32:43.249115       1 pv_controller.go:1259] deletion of volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) since it's in attaching or detaching state
I0905 20:32:43.249130       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: set phase Failed
I0905 20:32:43.249144       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: phase Failed already set
E0905 20:32:43.249180       1 goroutinemap.go:150] Operation for "delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]" failed. No retries permitted until 2022-09-05 20:32:44.249158862 +0000 UTC m=+538.642901151 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) since it's in attaching or detaching state
I0905 20:32:43.301661       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:32:45.170589       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 14 items received
I0905 20:32:45.329461       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:32:47.016655       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) returned with <nil>
I0905 20:32:47.016713       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d) succeeded
I0905 20:32:47.016725       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d was detached from node:capz-06vmzc-md-0-dbrp2
I0905 20:32:47.016752       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d") on node "capz-06vmzc-md-0-dbrp2" 
I0905 20:32:53.081137       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.104µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38062" resp=200
I0905 20:32:55.124913       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 19 items received
I0905 20:32:58.145355       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:32:58.149589       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:32:58.242682       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:32:58.242748       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" with version 1800
I0905 20:32:58.242785       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: phase: Failed, bound to: "azuredisk-7463/pvc-k7wmw (uid: 4818e932-c294-4904-bc10-3f517dd0280d)", boundByController: true
I0905 20:32:58.242843       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is bound to claim azuredisk-7463/pvc-k7wmw
I0905 20:32:58.242863       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: claim azuredisk-7463/pvc-k7wmw not found
I0905 20:32:58.242871       1 pv_controller.go:1108] reclaimVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: policy is Delete
I0905 20:32:58.242889       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]]
I0905 20:32:58.242917       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4818e932-c294-4904-bc10-3f517dd0280d] started
I0905 20:32:58.244679       1 gc_controller.go:161] GC'ing orphaned
... skipping 6 lines ...
I0905 20:33:03.493460       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-4818e932-c294-4904-bc10-3f517dd0280d
I0905 20:33:03.493507       1 pv_controller.go:1435] volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" deleted
I0905 20:33:03.493525       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4818e932-c294-4904-bc10-3f517dd0280d]: success
I0905 20:33:03.506225       1 pv_protection_controller.go:205] Got event on PV pvc-4818e932-c294-4904-bc10-3f517dd0280d
I0905 20:33:03.506261       1 pv_protection_controller.go:125] Processing PV pvc-4818e932-c294-4904-bc10-3f517dd0280d
I0905 20:33:03.506631       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4818e932-c294-4904-bc10-3f517dd0280d" with version 1851
I0905 20:33:03.506667       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: phase: Failed, bound to: "azuredisk-7463/pvc-k7wmw (uid: 4818e932-c294-4904-bc10-3f517dd0280d)", boundByController: true
I0905 20:33:03.506697       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: volume is bound to claim azuredisk-7463/pvc-k7wmw
I0905 20:33:03.506716       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: claim azuredisk-7463/pvc-k7wmw not found
I0905 20:33:03.506725       1 pv_controller.go:1108] reclaimVolume[pvc-4818e932-c294-4904-bc10-3f517dd0280d]: policy is Delete
I0905 20:33:03.506741       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]]
I0905 20:33:03.506749       1 pv_controller.go:1763] operation "delete-pvc-4818e932-c294-4904-bc10-3f517dd0280d[63549220-2dca-41a1-ad0b-09bb590b012f]" is already running, skipping
I0905 20:33:03.512241       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-4818e932-c294-4904-bc10-3f517dd0280d
... skipping 281 lines ...
I0905 20:33:29.986225       1 pv_controller.go:1752] scheduleOperation[delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]]
I0905 20:33:29.986241       1 pv_controller.go:1763] operation "delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]" is already running, skipping
I0905 20:33:29.986362       1 pv_controller.go:1231] deleteVolumeOperation [pvc-58d5db18-b486-467c-8e46-4fc75532e379] started
I0905 20:33:29.985478       1 pv_protection_controller.go:205] Got event on PV pvc-58d5db18-b486-467c-8e46-4fc75532e379
I0905 20:33:29.988564       1 pv_controller.go:1340] isVolumeReleased[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is released
I0905 20:33:29.988582       1 pv_controller.go:1404] doDeleteVolume [pvc-58d5db18-b486-467c-8e46-4fc75532e379]
I0905 20:33:29.988704       1 pv_controller.go:1259] deletion of volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state
I0905 20:33:29.988749       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: set phase Failed
I0905 20:33:29.988819       1 pv_controller.go:858] updating PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: set phase Failed
I0905 20:33:29.992215       1 pv_protection_controller.go:205] Got event on PV pvc-58d5db18-b486-467c-8e46-4fc75532e379
I0905 20:33:29.992215       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" with version 1947
I0905 20:33:29.992251       1 pv_controller.go:879] volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" entered phase "Failed"
I0905 20:33:29.992264       1 pv_controller.go:901] volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state
E0905 20:33:29.992303       1 goroutinemap.go:150] Operation for "delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]" failed. No retries permitted until 2022-09-05 20:33:30.492284313 +0000 UTC m=+584.886026702 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state
I0905 20:33:29.992483       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" with version 1947
I0905 20:33:29.992672       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: phase: Failed, bound to: "azuredisk-9241/pvc-tnsrj (uid: 58d5db18-b486-467c-8e46-4fc75532e379)", boundByController: true
I0905 20:33:29.992851       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is bound to claim azuredisk-9241/pvc-tnsrj
I0905 20:33:29.992784       1 event.go:291] "Event occurred" object="pvc-58d5db18-b486-467c-8e46-4fc75532e379" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state"
I0905 20:33:29.993022       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: claim azuredisk-9241/pvc-tnsrj not found
I0905 20:33:29.993038       1 pv_controller.go:1108] reclaimVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: policy is Delete
I0905 20:33:29.993065       1 pv_controller.go:1752] scheduleOperation[delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]]
I0905 20:33:29.993151       1 pv_controller.go:1765] operation "delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]" postponed due to exponential backoff
I0905 20:33:33.081057       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="73.404µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45294" resp=200
I0905 20:33:33.310859       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-bv5pc ReadyCondition updated. Updating timestamp.
... skipping 2 lines ...
I0905 20:33:38.245881       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:33:43.081284       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.703µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35150" resp=200
I0905 20:33:43.121595       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0905 20:33:43.151874       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:33:43.244429       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:33:43.244502       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" with version 1947
I0905 20:33:43.244541       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: phase: Failed, bound to: "azuredisk-9241/pvc-tnsrj (uid: 58d5db18-b486-467c-8e46-4fc75532e379)", boundByController: true
I0905 20:33:43.244582       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is bound to claim azuredisk-9241/pvc-tnsrj
I0905 20:33:43.244603       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: claim azuredisk-9241/pvc-tnsrj not found
I0905 20:33:43.244612       1 pv_controller.go:1108] reclaimVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: policy is Delete
I0905 20:33:43.244628       1 pv_controller.go:1752] scheduleOperation[delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]]
I0905 20:33:43.244663       1 pv_controller.go:1231] deleteVolumeOperation [pvc-58d5db18-b486-467c-8e46-4fc75532e379] started
I0905 20:33:43.247568       1 pv_controller.go:1340] isVolumeReleased[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is released
I0905 20:33:43.247590       1 pv_controller.go:1404] doDeleteVolume [pvc-58d5db18-b486-467c-8e46-4fc75532e379]
I0905 20:33:43.247625       1 pv_controller.go:1259] deletion of volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state
I0905 20:33:43.247645       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: set phase Failed
I0905 20:33:43.247658       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: phase Failed already set
E0905 20:33:43.247693       1 goroutinemap.go:150] Operation for "delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]" failed. No retries permitted until 2022-09-05 20:33:44.247667487 +0000 UTC m=+598.641409876 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) since it's in attaching or detaching state
I0905 20:33:44.856383       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) returned with <nil>
I0905 20:33:44.856445       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379) succeeded
I0905 20:33:44.856803       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379 was detached from node:capz-06vmzc-md-0-bv5pc
I0905 20:33:44.856904       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379") on node "capz-06vmzc-md-0-bv5pc" 
I0905 20:33:48.148713       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 741 items received
I0905 20:33:52.661521       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
... skipping 6 lines ...
I0905 20:33:58.212133       1 controller.go:291] nodeSync has been triggered
I0905 20:33:58.212142       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0905 20:33:58.212176       1 controller.go:804] Finished updateLoadBalancerHosts
I0905 20:33:58.212185       1 controller.go:731] It took 4.3802e-05 seconds to finish nodeSyncInternal
I0905 20:33:58.245508       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:33:58.245754       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" with version 1947
I0905 20:33:58.245824       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: phase: Failed, bound to: "azuredisk-9241/pvc-tnsrj (uid: 58d5db18-b486-467c-8e46-4fc75532e379)", boundByController: true
I0905 20:33:58.245887       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is bound to claim azuredisk-9241/pvc-tnsrj
I0905 20:33:58.245925       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: claim azuredisk-9241/pvc-tnsrj not found
I0905 20:33:58.245937       1 pv_controller.go:1108] reclaimVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: policy is Delete
I0905 20:33:58.245953       1 pv_controller.go:1752] scheduleOperation[delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]]
I0905 20:33:58.246011       1 gc_controller.go:161] GC'ing orphaned
I0905 20:33:58.246025       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
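
The "resyncing PV controller" lines recur every 15 seconds (20:33:43, 20:33:58, and so on). Each resync re-reads every PV, re-applies the Delete reclaim policy and re-queues the delete operation, which is how a volume whose previous attempt was parked in backoff gets retried even when no new API event arrives. A minimal ticker-driven resync loop in that spirit is below; the 15-second period is read off the timestamps here, and the example shortens it so it finishes quickly.

package main

import (
	"context"
	"fmt"
	"time"
)

// resyncLoop calls syncAll on a fixed period until ctx is cancelled,
// mirroring the periodic "resyncing PV controller" lines.
func resyncLoop(ctx context.Context, period time.Duration, syncAll func()) {
	tick := time.NewTicker(period)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-tick.C:
			syncAll()
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	// Shortened period for the example; the controller in this log resyncs
	// roughly every 15 seconds.
	resyncLoop(ctx, 100*time.Millisecond, func() {
		fmt.Println("resyncing PV controller: re-evaluating reclaim policy for every volume")
	})
}
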
... skipping 6 lines ...
I0905 20:34:03.596092       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-58d5db18-b486-467c-8e46-4fc75532e379
I0905 20:34:03.596130       1 pv_controller.go:1435] volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" deleted
I0905 20:34:03.596168       1 pv_controller.go:1283] deleteVolumeOperation [pvc-58d5db18-b486-467c-8e46-4fc75532e379]: success
I0905 20:34:03.609122       1 pv_protection_controller.go:205] Got event on PV pvc-58d5db18-b486-467c-8e46-4fc75532e379
I0905 20:34:03.609355       1 pv_protection_controller.go:125] Processing PV pvc-58d5db18-b486-467c-8e46-4fc75532e379
I0905 20:34:03.609166       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" with version 1998
I0905 20:34:03.609540       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: phase: Failed, bound to: "azuredisk-9241/pvc-tnsrj (uid: 58d5db18-b486-467c-8e46-4fc75532e379)", boundByController: true
I0905 20:34:03.609577       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: volume is bound to claim azuredisk-9241/pvc-tnsrj
I0905 20:34:03.609713       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: claim azuredisk-9241/pvc-tnsrj not found
I0905 20:34:03.610133       1 pv_controller.go:1108] reclaimVolume[pvc-58d5db18-b486-467c-8e46-4fc75532e379]: policy is Delete
I0905 20:34:03.610319       1 pv_controller.go:1752] scheduleOperation[delete-pvc-58d5db18-b486-467c-8e46-4fc75532e379[3909da75-0c77-420f-9305-347e57a6e5f0]]
I0905 20:34:03.610463       1 pv_controller.go:1231] deleteVolumeOperation [pvc-58d5db18-b486-467c-8e46-4fc75532e379] started
I0905 20:34:03.615398       1 pv_controller.go:1243] Volume "pvc-58d5db18-b486-467c-8e46-4fc75532e379" is already being deleted
... skipping 110 lines ...
I0905 20:34:11.343616       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9241
I0905 20:34:11.380357       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9241, name kube-root-ca.crt, uid ca66185f-d39c-46ed-98bb-ecfca65cfa97, event type delete
I0905 20:34:11.382169       1 publisher.go:186] Finished syncing namespace "azuredisk-9241" (1.761696ms)
I0905 20:34:11.413688       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9241, name default-token-7jxtq, uid 0f4789e9-eb0f-42fe-9ed9-f69fe47ee57e, event type delete
I0905 20:34:11.423746       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9241, name default, uid b633cf8d-2d45-4ce6-9922-d3cc9678f08a, event type delete
I0905 20:34:11.424795       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9241" (3.101µs)
E0905 20:34:11.430957       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9241/default: secrets "default-token-w6gb2" is forbidden: unable to create new content in namespace azuredisk-9241 because it is being terminated
I0905 20:34:11.431510       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9241/default), service account deleted, removing tokens
I0905 20:34:11.442906       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-6t9lm.171211143a6e61bb, uid 7f76d4d9-dc00-4a87-a8f0-37ed8cc8c42a, event type delete
I0905 20:34:11.446952       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-6t9lm.17121116b5837d42, uid f39b067a-ca64-44d7-8e66-3bbeeca86305, event type delete
I0905 20:34:11.450721       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-6t9lm.171211176346e62c, uid 53bc6c3d-375d-4652-93d2-d5b1c1e9d908, event type delete
I0905 20:34:11.454125       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-6t9lm.1712111765d18792, uid 40228063-d1f8-446d-99a4-b540ee07cba3, event type delete
I0905 20:34:11.457506       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-6t9lm.171211176c347ba5, uid bed9077b-df0d-4a4a-bbb4-d48381caaeaa, event type delete
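
With the volume gone, the test namespace azuredisk-9241 is torn down: the namespace controller deletes its configmaps, secrets, service accounts and events, and the tokens controller briefly logs a Forbidden error because it cannot create a replacement token secret in a namespace that is already terminating. That error is expected during teardown, not a test failure. The sketch below shows one way to recognise the condition; it assumes the k8s.io/apimachinery error helpers plus a message check and is not how tokens_controller itself decides what to do.

package main

import (
	"errors"
	"fmt"
	"strings"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// benignTeardownError reports whether an error looks like the expected
// Forbidden response for writes into a namespace that is being deleted.
// The message check is an illustrative shortcut.
func benignTeardownError(err error) bool {
	return apierrors.IsForbidden(err) &&
		strings.Contains(err.Error(), "because it is being terminated")
}

func main() {
	// Reconstruct an error shaped like the tokens_controller line above.
	err := apierrors.NewForbidden(
		schema.GroupResource{Resource: "secrets"},
		"default-token-w6gb2",
		errors.New("unable to create new content in namespace azuredisk-9241 because it is being terminated"),
	)
	fmt.Println("ignore during namespace teardown:", benignTeardownError(err))
}
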
... skipping 723 lines ...
I0905 20:35:28.313154       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: claim azuredisk-9336/pvc-hqr8j not found
I0905 20:35:28.313164       1 pv_controller.go:1108] reclaimVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: policy is Delete
I0905 20:35:28.313200       1 pv_controller.go:1752] scheduleOperation[delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]]
I0905 20:35:28.313262       1 pv_controller.go:1763] operation "delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]" is already running, skipping
I0905 20:35:28.327824       1 pv_controller.go:1340] isVolumeReleased[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is released
I0905 20:35:28.327865       1 pv_controller.go:1404] doDeleteVolume [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]
I0905 20:35:28.451927       1 pv_controller.go:1259] deletion of volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:35:28.451954       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: set phase Failed
I0905 20:35:28.451964       1 pv_controller.go:858] updating PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: set phase Failed
I0905 20:35:28.457970       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" with version 2221
I0905 20:35:28.458685       1 pv_controller.go:879] volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" entered phase "Failed"
I0905 20:35:28.458721       1 pv_controller.go:901] volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:35:28.458006       1 pv_protection_controller.go:205] Got event on PV pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da
I0905 20:35:28.458027       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" with version 2221
E0905 20:35:28.458771       1 goroutinemap.go:150] Operation for "delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]" failed. No retries permitted until 2022-09-05 20:35:28.958749765 +0000 UTC m=+703.352492154 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:35:28.458777       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: phase: Failed, bound to: "azuredisk-9336/pvc-hqr8j (uid: af33e16b-cb85-444c-a8b2-cd90fa0cc3da)", boundByController: true
I0905 20:35:28.458807       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is bound to claim azuredisk-9336/pvc-hqr8j
I0905 20:35:28.458855       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: claim azuredisk-9336/pvc-hqr8j not found
I0905 20:35:28.458865       1 pv_controller.go:1108] reclaimVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: policy is Delete
I0905 20:35:28.458880       1 pv_controller.go:1752] scheduleOperation[delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]]
I0905 20:35:28.458888       1 pv_controller.go:1765] operation "delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]" postponed due to exponential backoff
I0905 20:35:28.458927       1 event.go:291] "Event occurred" object="pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
... skipping 67 lines ...
I0905 20:35:43.252734       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is bound to claim azuredisk-9336/pvc-j6psx
I0905 20:35:43.252748       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: claim azuredisk-9336/pvc-j6psx found: phase: Bound, bound to: "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba", bindCompleted: true, boundByController: true
I0905 20:35:43.252761       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: all is bound
I0905 20:35:43.252768       1 pv_controller.go:858] updating PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: set phase Bound
I0905 20:35:43.252777       1 pv_controller.go:861] updating PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase Bound already set
I0905 20:35:43.252789       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" with version 2221
I0905 20:35:43.252808       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: phase: Failed, bound to: "azuredisk-9336/pvc-hqr8j (uid: af33e16b-cb85-444c-a8b2-cd90fa0cc3da)", boundByController: true
I0905 20:35:43.252831       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is bound to claim azuredisk-9336/pvc-hqr8j
I0905 20:35:43.252850       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: claim azuredisk-9336/pvc-hqr8j not found
I0905 20:35:43.252862       1 pv_controller.go:1108] reclaimVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: policy is Delete
I0905 20:35:43.252878       1 pv_controller.go:1752] scheduleOperation[delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]]
I0905 20:35:43.252919       1 pv_controller.go:1231] deleteVolumeOperation [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da] started
I0905 20:35:43.263783       1 pv_controller.go:1340] isVolumeReleased[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is released
I0905 20:35:43.263814       1 pv_controller.go:1404] doDeleteVolume [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]
I0905 20:35:43.263852       1 pv_controller.go:1259] deletion of volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) since it's in attaching or detaching state
I0905 20:35:43.263867       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: set phase Failed
I0905 20:35:43.263879       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: phase Failed already set
E0905 20:35:43.263910       1 goroutinemap.go:150] Operation for "delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]" failed. No retries permitted until 2022-09-05 20:35:44.263888779 +0000 UTC m=+718.657631068 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) since it's in attaching or detaching state
I0905 20:35:44.155541       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0905 20:35:47.428231       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) returned with <nil>
I0905 20:35:47.428273       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da) succeeded
I0905 20:35:47.428285       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da was detached from node:capz-06vmzc-md-0-dbrp2
I0905 20:35:47.428310       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da") on node "capz-06vmzc-md-0-dbrp2" 
I0905 20:35:53.081511       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="103.405µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46264" resp=200
... skipping 46 lines ...
I0905 20:35:58.252947       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is bound to claim azuredisk-9336/pvc-j6psx
I0905 20:35:58.252961       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: claim azuredisk-9336/pvc-j6psx found: phase: Bound, bound to: "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba", bindCompleted: true, boundByController: true
I0905 20:35:58.252975       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: all is bound
I0905 20:35:58.252983       1 pv_controller.go:858] updating PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: set phase Bound
I0905 20:35:58.252993       1 pv_controller.go:861] updating PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase Bound already set
I0905 20:35:58.253005       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" with version 2221
I0905 20:35:58.253025       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: phase: Failed, bound to: "azuredisk-9336/pvc-hqr8j (uid: af33e16b-cb85-444c-a8b2-cd90fa0cc3da)", boundByController: true
I0905 20:35:58.253054       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is bound to claim azuredisk-9336/pvc-hqr8j
I0905 20:35:58.253074       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: claim azuredisk-9336/pvc-hqr8j not found
I0905 20:35:58.253082       1 pv_controller.go:1108] reclaimVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: policy is Delete
I0905 20:35:58.253097       1 pv_controller.go:1752] scheduleOperation[delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]]
I0905 20:35:58.253143       1 pv_controller.go:1231] deleteVolumeOperation [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da] started
I0905 20:35:58.261088       1 pv_controller.go:1340] isVolumeReleased[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is released
... skipping 3 lines ...
I0905 20:36:03.488010       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da
I0905 20:36:03.488132       1 pv_controller.go:1435] volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" deleted
I0905 20:36:03.488213       1 pv_controller.go:1283] deleteVolumeOperation [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: success
I0905 20:36:03.501702       1 pv_protection_controller.go:205] Got event on PV pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da
I0905 20:36:03.501737       1 pv_protection_controller.go:125] Processing PV pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da
I0905 20:36:03.502214       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" with version 2275
I0905 20:36:03.502294       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: phase: Failed, bound to: "azuredisk-9336/pvc-hqr8j (uid: af33e16b-cb85-444c-a8b2-cd90fa0cc3da)", boundByController: true
I0905 20:36:03.502360       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: volume is bound to claim azuredisk-9336/pvc-hqr8j
I0905 20:36:03.502405       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: claim azuredisk-9336/pvc-hqr8j not found
I0905 20:36:03.502421       1 pv_controller.go:1108] reclaimVolume[pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da]: policy is Delete
I0905 20:36:03.502439       1 pv_controller.go:1752] scheduleOperation[delete-pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da[f372afd7-f6b2-4f3b-948c-611b802d2e70]]
I0905 20:36:03.502489       1 pv_controller.go:1231] deleteVolumeOperation [pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da] started
I0905 20:36:03.507885       1 pv_controller.go:1243] Volume "pvc-af33e16b-cb85-444c-a8b2-cd90fa0cc3da" is already being deleted
... skipping 190 lines ...
I0905 20:36:39.487421       1 pv_controller.go:1108] reclaimVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: policy is Delete
I0905 20:36:39.487436       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]]
I0905 20:36:39.487443       1 pv_controller.go:1763] operation "delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]" is already running, skipping
I0905 20:36:39.487471       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba] started
I0905 20:36:39.490104       1 pv_controller.go:1340] isVolumeReleased[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is released
I0905 20:36:39.490123       1 pv_controller.go:1404] doDeleteVolume [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]
I0905 20:36:39.541967       1 pv_controller.go:1259] deletion of volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:36:39.542009       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: set phase Failed
I0905 20:36:39.542021       1 pv_controller.go:858] updating PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: set phase Failed
I0905 20:36:39.546153       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" with version 2339
I0905 20:36:39.546467       1 pv_controller.go:879] volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" entered phase "Failed"
I0905 20:36:39.546649       1 pv_controller.go:901] volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:36:39.546881       1 goroutinemap.go:150] Operation for "delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]" failed. No retries permitted until 2022-09-05 20:36:40.04685774 +0000 UTC m=+774.440600129 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:36:39.546217       1 pv_protection_controller.go:205] Got event on PV pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba
I0905 20:36:39.546237       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" with version 2339
I0905 20:36:39.547297       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase: Failed, bound to: "azuredisk-9336/pvc-j6psx (uid: f0f049fc-3458-4c08-9628-f3bd5c6225ba)", boundByController: true
I0905 20:36:39.547466       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is bound to claim azuredisk-9336/pvc-j6psx
I0905 20:36:39.547621       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: claim azuredisk-9336/pvc-j6psx not found
I0905 20:36:39.548803       1 pv_controller.go:1108] reclaimVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: policy is Delete
I0905 20:36:39.547469       1 event.go:291] "Event occurred" object="pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:36:39.549051       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]]
I0905 20:36:39.549263       1 pv_controller.go:1765] operation "delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]" postponed due to exponential backoff
... skipping 11 lines ...
I0905 20:36:43.087395       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.104µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39112" resp=200
I0905 20:36:43.159496       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:36:43.254414       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:36:43.254511       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" with version 2339
I0905 20:36:43.254552       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-dqshn" with version 2025
I0905 20:36:43.254576       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-9336/pvc-dqshn]: phase: Bound, bound to: "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc", bindCompleted: true, boundByController: true
I0905 20:36:43.254645       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase: Failed, bound to: "azuredisk-9336/pvc-j6psx (uid: f0f049fc-3458-4c08-9628-f3bd5c6225ba)", boundByController: true
I0905 20:36:43.254701       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-dqshn]: volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" found: phase: Bound, bound to: "azuredisk-9336/pvc-dqshn (uid: b86301a5-3694-4de4-a889-8e81c7ed60dc)", boundByController: true
I0905 20:36:43.254720       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-dqshn]: claim is already correctly bound
I0905 20:36:43.254732       1 pv_controller.go:1012] binding volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" to claim "azuredisk-9336/pvc-dqshn"
I0905 20:36:43.254744       1 pv_controller.go:910] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: binding to "azuredisk-9336/pvc-dqshn"
I0905 20:36:43.254765       1 pv_controller.go:922] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: already bound to "azuredisk-9336/pvc-dqshn"
I0905 20:36:43.254779       1 pv_controller.go:858] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Bound
... skipping 16 lines ...
I0905 20:36:43.255141       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba] started
I0905 20:36:43.255516       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: all is bound
I0905 20:36:43.255533       1 pv_controller.go:858] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Bound
I0905 20:36:43.255543       1 pv_controller.go:861] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase Bound already set
I0905 20:36:43.260567       1 pv_controller.go:1340] isVolumeReleased[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is released
I0905 20:36:43.260588       1 pv_controller.go:1404] doDeleteVolume [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]
I0905 20:36:43.260622       1 pv_controller.go:1259] deletion of volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) since it's in attaching or detaching state
I0905 20:36:43.260635       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: set phase Failed
I0905 20:36:43.260645       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase Failed already set
E0905 20:36:43.260674       1 goroutinemap.go:150] Operation for "delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]" failed. No retries permitted until 2022-09-05 20:36:44.260655405 +0000 UTC m=+778.654397794 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) since it's in attaching or detaching state
I0905 20:36:43.341560       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:36:44.167299       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0905 20:36:50.760540       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 16 items received
I0905 20:36:53.080829       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.704µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50998" resp=200
I0905 20:36:54.147933       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0905 20:36:57.246685       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba) returned with <nil>
... skipping 10 lines ...
I0905 20:36:58.255636       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is bound to claim azuredisk-9336/pvc-dqshn
I0905 20:36:58.255658       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn found: phase: Bound, bound to: "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc", bindCompleted: true, boundByController: true
I0905 20:36:58.255679       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: all is bound
I0905 20:36:58.255693       1 pv_controller.go:858] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Bound
I0905 20:36:58.255705       1 pv_controller.go:861] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase Bound already set
I0905 20:36:58.255723       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" with version 2339
I0905 20:36:58.255747       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase: Failed, bound to: "azuredisk-9336/pvc-j6psx (uid: f0f049fc-3458-4c08-9628-f3bd5c6225ba)", boundByController: true
I0905 20:36:58.255774       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is bound to claim azuredisk-9336/pvc-j6psx
I0905 20:36:58.255800       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: claim azuredisk-9336/pvc-j6psx not found
I0905 20:36:58.255817       1 pv_controller.go:1108] reclaimVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: policy is Delete
I0905 20:36:58.255834       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]]
I0905 20:36:58.255869       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba] started
I0905 20:36:58.256061       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-dqshn" with version 2025
... skipping 21 lines ...
I0905 20:37:03.509292       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba
I0905 20:37:03.509326       1 pv_controller.go:1435] volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" deleted
I0905 20:37:03.509339       1 pv_controller.go:1283] deleteVolumeOperation [pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: success
I0905 20:37:03.519047       1 pv_protection_controller.go:205] Got event on PV pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba
I0905 20:37:03.519087       1 pv_protection_controller.go:125] Processing PV pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba
I0905 20:37:03.519562       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" with version 2377
I0905 20:37:03.519631       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: phase: Failed, bound to: "azuredisk-9336/pvc-j6psx (uid: f0f049fc-3458-4c08-9628-f3bd5c6225ba)", boundByController: true
I0905 20:37:03.519681       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: volume is bound to claim azuredisk-9336/pvc-j6psx
I0905 20:37:03.519709       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: claim azuredisk-9336/pvc-j6psx not found
I0905 20:37:03.519724       1 pv_controller.go:1108] reclaimVolume[pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba]: policy is Delete
I0905 20:37:03.519759       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]]
I0905 20:37:03.519773       1 pv_controller.go:1763] operation "delete-pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba[e863d3bd-5093-4d5a-8008-73aa1e170066]" is already running, skipping
I0905 20:37:03.524797       1 pv_controller_base.go:235] volume "pvc-f0f049fc-3458-4c08-9628-f3bd5c6225ba" deleted
... skipping 148 lines ...
I0905 20:37:36.215749       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn not found
I0905 20:37:36.215763       1 pv_controller.go:1108] reclaimVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: policy is Delete
I0905 20:37:36.215777       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]]
I0905 20:37:36.215786       1 pv_controller.go:1763] operation "delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]" is already running, skipping
I0905 20:37:36.218750       1 pv_controller.go:1340] isVolumeReleased[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is released
I0905 20:37:36.218771       1 pv_controller.go:1404] doDeleteVolume [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]
I0905 20:37:36.242725       1 pv_controller.go:1259] deletion of volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:37:36.242749       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Failed
I0905 20:37:36.242760       1 pv_controller.go:858] updating PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Failed
I0905 20:37:36.246689       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" with version 2440
I0905 20:37:36.246726       1 pv_controller.go:879] volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" entered phase "Failed"
I0905 20:37:36.246851       1 pv_controller.go:901] volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
E0905 20:37:36.246945       1 goroutinemap.go:150] Operation for "delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]" failed. No retries permitted until 2022-09-05 20:37:36.746924589 +0000 UTC m=+831.140666978 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted
I0905 20:37:36.247289       1 event.go:291] "Event occurred" object="pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-bv5pc), could not be deleted"
I0905 20:37:36.247614       1 pv_protection_controller.go:205] Got event on PV pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc
I0905 20:37:36.247798       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" with version 2440
I0905 20:37:36.247981       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase: Failed, bound to: "azuredisk-9336/pvc-dqshn (uid: b86301a5-3694-4de4-a889-8e81c7ed60dc)", boundByController: true
I0905 20:37:36.248277       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is bound to claim azuredisk-9336/pvc-dqshn
I0905 20:37:36.248454       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn not found
I0905 20:37:36.248586       1 pv_controller.go:1108] reclaimVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: policy is Delete
I0905 20:37:36.248732       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]]
I0905 20:37:36.249639       1 pv_controller.go:1765] operation "delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]" postponed due to exponential backoff
I0905 20:37:38.254461       1 gc_controller.go:161] GC'ing orphaned
... skipping 10 lines ...
I0905 20:37:39.464605       1 azure_controller_standard.go:166] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc)
I0905 20:37:42.146642       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0905 20:37:43.087846       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="80.804µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41974" resp=200
I0905 20:37:43.162763       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:37:43.256828       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:37:43.256897       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" with version 2440
I0905 20:37:43.256936       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase: Failed, bound to: "azuredisk-9336/pvc-dqshn (uid: b86301a5-3694-4de4-a889-8e81c7ed60dc)", boundByController: true
I0905 20:37:43.256986       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is bound to claim azuredisk-9336/pvc-dqshn
I0905 20:37:43.257009       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn not found
I0905 20:37:43.257023       1 pv_controller.go:1108] reclaimVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: policy is Delete
I0905 20:37:43.257041       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]]
I0905 20:37:43.257075       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc] started
I0905 20:37:43.262060       1 pv_controller.go:1340] isVolumeReleased[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is released
I0905 20:37:43.262085       1 pv_controller.go:1404] doDeleteVolume [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]
I0905 20:37:43.262119       1 pv_controller.go:1259] deletion of volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) since it's in attaching or detaching state
I0905 20:37:43.262133       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: set phase Failed
I0905 20:37:43.262144       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase Failed already set
E0905 20:37:43.262173       1 goroutinemap.go:150] Operation for "delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]" failed. No retries permitted until 2022-09-05 20:37:44.262153526 +0000 UTC m=+838.655895815 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) since it's in attaching or detaching state
I0905 20:37:43.352057       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-bv5pc ReadyCondition updated. Updating timestamp.
I0905 20:37:53.082106       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="79.404µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51360" resp=200
I0905 20:37:53.264643       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:37:55.042203       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) returned with <nil>
I0905 20:37:55.042251       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc) succeeded
I0905 20:37:55.042266       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc was detached from node:capz-06vmzc-md-0-bv5pc
I0905 20:37:55.042295       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc") on node "capz-06vmzc-md-0-bv5pc" 
I0905 20:37:58.154213       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:37:58.163357       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:37:58.254908       1 gc_controller.go:161] GC'ing orphaned
I0905 20:37:58.254968       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:37:58.256937       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:37:58.256988       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" with version 2440
I0905 20:37:58.257018       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase: Failed, bound to: "azuredisk-9336/pvc-dqshn (uid: b86301a5-3694-4de4-a889-8e81c7ed60dc)", boundByController: true
I0905 20:37:58.257042       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is bound to claim azuredisk-9336/pvc-dqshn
I0905 20:37:58.257056       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn not found
I0905 20:37:58.257064       1 pv_controller.go:1108] reclaimVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: policy is Delete
I0905 20:37:58.257076       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]]
I0905 20:37:58.257106       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc] started
I0905 20:37:58.262959       1 pv_controller.go:1340] isVolumeReleased[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is released
... skipping 3 lines ...
I0905 20:38:03.556677       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc
I0905 20:38:03.556829       1 pv_controller.go:1435] volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" deleted
I0905 20:38:03.556938       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: success
I0905 20:38:03.570496       1 pv_protection_controller.go:205] Got event on PV pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc
I0905 20:38:03.570693       1 pv_protection_controller.go:125] Processing PV pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc
I0905 20:38:03.570657       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" with version 2482
I0905 20:38:03.571183       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: phase: Failed, bound to: "azuredisk-9336/pvc-dqshn (uid: b86301a5-3694-4de4-a889-8e81c7ed60dc)", boundByController: true
I0905 20:38:03.571263       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: volume is bound to claim azuredisk-9336/pvc-dqshn
I0905 20:38:03.571310       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: claim azuredisk-9336/pvc-dqshn not found
I0905 20:38:03.571362       1 pv_controller.go:1108] reclaimVolume[pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc]: policy is Delete
I0905 20:38:03.571405       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc[308b4f06-95e8-4c9c-9d0b-477a29253d3f]]
I0905 20:38:03.571495       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc] started
I0905 20:38:03.577252       1 pv_controller_base.go:235] volume "pvc-b86301a5-3694-4de4-a889-8e81c7ed60dc" deleted
... skipping 41 lines ...
I0905 20:38:09.046796       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-2205/pvc-zgfkq]: no volume found
I0905 20:38:09.047270       1 pv_controller.go:1445] provisionClaim[azuredisk-2205/pvc-zgfkq]: started
I0905 20:38:09.047301       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-2205/pvc-zgfkq[07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]]
I0905 20:38:09.047369       1 pv_controller.go:1485] provisionClaimOperation [azuredisk-2205/pvc-zgfkq] started, class: "azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-vdbtb"
I0905 20:38:09.047425       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-2205/pvc-zgfkq]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0905 20:38:09.049643       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" duration="28.343606ms"
I0905 20:38:09.049687       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-f8klq\": the object has been modified; please apply your changes to the latest version and try again"
I0905 20:38:09.049726       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" startTime="2022-09-05 20:38:09.049705711 +0000 UTC m=+863.443448000"
I0905 20:38:09.050144       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-f8klq" timed out (false) [last progress check: 2022-09-05 20:38:09 +0000 UTC - now: 2022-09-05 20:38:09.050136736 +0000 UTC m=+863.443879125]
I0905 20:38:09.050484       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94" (23.167712ms)
I0905 20:38:09.050525       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94", timestamp:time.Time{wall:0xc0bdb56c41a3c8aa, ext:863421253343, loc:(*time.Location)(0x751a1a0)}}
I0905 20:38:09.050592       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0905 20:38:09.051399       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94"
... skipping 99 lines ...
I0905 20:38:12.114985       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe") from node "capz-06vmzc-md-0-bv5pc" 
I0905 20:38:12.166038       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" to node "capz-06vmzc-md-0-bv5pc".
I0905 20:38:12.212117       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" lun 0 to node "capz-06vmzc-md-0-bv5pc".
I0905 20:38:12.212162       1 azure_controller_standard.go:93] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-bv5pc) - attach disk(capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) with DiskEncryptionSetID()
I0905 20:38:12.437829       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9336
I0905 20:38:12.478639       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9336, name default-token-fxghr, uid 306e4abf-5830-4e00-864f-751956767c1f, event type delete
E0905 20:38:12.496525       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-dxv68" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I0905 20:38:12.515016       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9336, name kube-root-ca.crt, uid de15215d-19e0-4c8d-9553-4fd56fcc89d4, event type delete
I0905 20:38:12.517943       1 publisher.go:186] Finished syncing namespace "azuredisk-9336" (2.877563ms)
I0905 20:38:12.521685       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9336/default), service account deleted, removing tokens
I0905 20:38:12.522375       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9336, name default, uid de4c037b-61dc-427c-b527-1f821b327d86, event type delete
I0905 20:38:12.522420       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9336" (3µs)
I0905 20:38:12.554045       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-fzt5f.171211222e6ae085, uid b785aa07-0116-4ea5-b10a-c447640b247e, event type delete
... skipping 164 lines ...
I0905 20:38:29.185711       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94" (685.038µs)
I0905 20:38:29.190696       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" duration="7.121403ms"
I0905 20:38:29.190912       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq"
I0905 20:38:29.190974       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" startTime="2022-09-05 20:38:29.190931468 +0000 UTC m=+883.584673757"
I0905 20:38:29.191369       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-f8klq" for a progress check after 596s
I0905 20:38:29.191404       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-f8klq" duration="460.226µs"
W0905 20:38:29.233837       1 reconciler.go:385] Multi-Attach error for volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe") from node "capz-06vmzc-md-0-dbrp2" Volume is already used by pods azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94-srz2g on node capz-06vmzc-md-0-bv5pc
I0905 20:38:29.233964       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-f8klq-c66764f94-fhc6b" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe\" Volume is already used by pod(s) azuredisk-volume-tester-f8klq-c66764f94-srz2g"
I0905 20:38:31.693454       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:38:33.081885       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="69.704µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35442" resp=200
I0905 20:38:33.358516       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:38:35.148536       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 0 items received
I0905 20:38:37.156801       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 50 items received
I0905 20:38:38.256697       1 gc_controller.go:161] GC'ing orphaned
... skipping 388 lines ...
I0905 20:40:08.853088       1 pv_controller.go:1108] reclaimVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: policy is Delete
I0905 20:40:08.853101       1 pv_controller.go:1752] scheduleOperation[delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]]
I0905 20:40:08.853109       1 pv_controller.go:1763] operation "delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]" is already running, skipping
I0905 20:40:08.853005       1 pv_controller.go:1231] deleteVolumeOperation [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe] started
I0905 20:40:08.855340       1 pv_controller.go:1340] isVolumeReleased[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is released
I0905 20:40:08.855366       1 pv_controller.go:1404] doDeleteVolume [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]
I0905 20:40:08.880014       1 pv_controller.go:1259] deletion of volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:40:08.880036       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: set phase Failed
I0905 20:40:08.880046       1 pv_controller.go:858] updating PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: set phase Failed
I0905 20:40:08.882932       1 pv_protection_controller.go:205] Got event on PV pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe
I0905 20:40:08.883447       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" with version 2780
I0905 20:40:08.883871       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: phase: Failed, bound to: "azuredisk-2205/pvc-zgfkq (uid: 07f8e9ef-f25a-42af-b307-1abf0b3d9bfe)", boundByController: true
I0905 20:40:08.883906       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is bound to claim azuredisk-2205/pvc-zgfkq
I0905 20:40:08.883928       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: claim azuredisk-2205/pvc-zgfkq not found
I0905 20:40:08.884000       1 pv_controller.go:1108] reclaimVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: policy is Delete
I0905 20:40:08.884121       1 pv_controller.go:1752] scheduleOperation[delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]]
I0905 20:40:08.884135       1 pv_controller.go:1763] operation "delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]" is already running, skipping
I0905 20:40:08.883607       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" with version 2780
I0905 20:40:08.884183       1 pv_controller.go:879] volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" entered phase "Failed"
I0905 20:40:08.884285       1 pv_controller.go:901] volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:40:08.884370       1 goroutinemap.go:150] Operation for "delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]" failed. No retries permitted until 2022-09-05 20:40:09.384350071 +0000 UTC m=+983.778092460 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:40:08.884550       1 event.go:291] "Event occurred" object="pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:40:08.908046       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe on node "capz-06vmzc-md-0-dbrp2"
I0905 20:40:09.388769       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:40:11.789595       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:40:11.790162       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:40:11.839033       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-06vmzc-md-0-dbrp2" succeeded. VolumesAttached: []
... skipping 5 lines ...
I0905 20:40:11.844372       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe"
I0905 20:40:11.844473       1 azure_controller_standard.go:166] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe)
I0905 20:40:13.080934       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="76.104µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59910" resp=200
I0905 20:40:13.170854       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:40:13.262707       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:40:13.262777       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" with version 2780
I0905 20:40:13.262833       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: phase: Failed, bound to: "azuredisk-2205/pvc-zgfkq (uid: 07f8e9ef-f25a-42af-b307-1abf0b3d9bfe)", boundByController: true
I0905 20:40:13.262888       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is bound to claim azuredisk-2205/pvc-zgfkq
I0905 20:40:13.262922       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: claim azuredisk-2205/pvc-zgfkq not found
I0905 20:40:13.262931       1 pv_controller.go:1108] reclaimVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: policy is Delete
I0905 20:40:13.262947       1 pv_controller.go:1752] scheduleOperation[delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]]
I0905 20:40:13.262992       1 pv_controller.go:1231] deleteVolumeOperation [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe] started
I0905 20:40:13.267304       1 pv_controller.go:1340] isVolumeReleased[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is released
I0905 20:40:13.267337       1 pv_controller.go:1404] doDeleteVolume [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]
I0905 20:40:13.267367       1 pv_controller.go:1259] deletion of volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) since it's in attaching or detaching state
I0905 20:40:13.267380       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: set phase Failed
I0905 20:40:13.267389       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: phase Failed already set
E0905 20:40:13.267413       1 goroutinemap.go:150] Operation for "delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]" failed. No retries permitted until 2022-09-05 20:40:14.267396982 +0000 UTC m=+988.661139271 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe) since it's in attaching or detaching state
I0905 20:40:13.381859       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:40:17.363847       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0905 20:40:18.261231       1 gc_controller.go:161] GC'ing orphaned
I0905 20:40:18.261272       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:40:18.275024       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:40:18.382729       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-control-plane-tgzc4 ReadyCondition updated. Updating timestamp.
... skipping 5 lines ...
I0905 20:40:27.296946       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe was detached from node:capz-06vmzc-md-0-dbrp2
I0905 20:40:27.297007       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe") on node "capz-06vmzc-md-0-dbrp2" 
I0905 20:40:28.156402       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:40:28.171623       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:40:28.263092       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:40:28.263244       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" with version 2780
I0905 20:40:28.263322       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: phase: Failed, bound to: "azuredisk-2205/pvc-zgfkq (uid: 07f8e9ef-f25a-42af-b307-1abf0b3d9bfe)", boundByController: true
I0905 20:40:28.263387       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is bound to claim azuredisk-2205/pvc-zgfkq
I0905 20:40:28.263453       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: claim azuredisk-2205/pvc-zgfkq not found
I0905 20:40:28.263468       1 pv_controller.go:1108] reclaimVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: policy is Delete
I0905 20:40:28.263486       1 pv_controller.go:1752] scheduleOperation[delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]]
I0905 20:40:28.263532       1 pv_controller.go:1231] deleteVolumeOperation [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe] started
I0905 20:40:28.268658       1 pv_controller.go:1340] isVolumeReleased[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is released
... skipping 3 lines ...
I0905 20:40:33.515000       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe
I0905 20:40:33.515042       1 pv_controller.go:1435] volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" deleted
I0905 20:40:33.515062       1 pv_controller.go:1283] deleteVolumeOperation [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: success
I0905 20:40:33.525949       1 pv_protection_controller.go:205] Got event on PV pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe
I0905 20:40:33.525984       1 pv_protection_controller.go:125] Processing PV pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe
I0905 20:40:33.526186       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" with version 2820
I0905 20:40:33.526391       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: phase: Failed, bound to: "azuredisk-2205/pvc-zgfkq (uid: 07f8e9ef-f25a-42af-b307-1abf0b3d9bfe)", boundByController: true
I0905 20:40:33.526520       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: volume is bound to claim azuredisk-2205/pvc-zgfkq
I0905 20:40:33.526550       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: claim azuredisk-2205/pvc-zgfkq not found
I0905 20:40:33.526709       1 pv_controller.go:1108] reclaimVolume[pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe]: policy is Delete
I0905 20:40:33.526732       1 pv_controller.go:1752] scheduleOperation[delete-pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe[b945dffe-0134-40d1-864b-a9852863323a]]
I0905 20:40:33.526988       1 pv_controller.go:1231] deleteVolumeOperation [pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe] started
I0905 20:40:33.530667       1 pv_controller.go:1243] Volume "pvc-07f8e9ef-f25a-42af-b307-1abf0b3d9bfe" is already being deleted
... skipping 205 lines ...
I0905 20:40:55.979407       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (11.147632ms)
I0905 20:40:55.984955       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (16.40993ms)
I0905 20:40:56.336429       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0905 20:40:56.358692       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 491844f7-9904-431e-b11d-dcad79264ed0, event type delete
I0905 20:40:56.361309       1 publisher.go:186] Finished syncing namespace "azuredisk-3410" (2.555744ms)
I0905 20:40:56.391877       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-svl52, uid 60a0cc60-654e-4a84-97e0-3c286a3a77ea, event type delete
E0905 20:40:56.410956       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-cfmgh" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0905 20:40:56.428759       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-bdq4c.1712117c5ea2808c, uid 9c9116fc-2a4e-44e9-9297-8e49dcde1041, event type delete
I0905 20:40:56.451288       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0905 20:40:56.451444       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 023a825a-561c-4fa7-b297-caa340c43f20, event type delete
I0905 20:40:56.451618       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.2µs)
I0905 20:40:56.505050       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (3.7µs)
I0905 20:40:56.506072       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3410, estimate: 0, errors: <nil>
... skipping 19 lines ...
I0905 20:40:57.787117       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1387/pvc-bqhm6]: no volume found
I0905 20:40:57.787190       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1387/pvc-bqhm6] status: set phase Pending
I0905 20:40:57.787271       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-bqhm6] status: phase Pending already set
I0905 20:40:57.787316       1 event.go:291] "Event occurred" object="azuredisk-1387/pvc-bqhm6" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0905 20:40:57.833966       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0905 20:40:57.877106       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-86qb5, uid a6966f34-2fc8-4db9-a4fd-8ec754ae9d3d, event type delete
E0905 20:40:57.889846       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-8mm2c" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0905 20:40:57.896714       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="azuredisk-1387/azuredisk-volume-tester-tdw65" podUID=9abb90b3-a297-459b-904a-929f0ffcd42b
I0905 20:40:57.897196       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1387/pvc-zqz9r"
I0905 20:40:57.897363       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-1387/pvc-zqz9r" duration="5.1µs"
I0905 20:40:57.897488       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1387/pvc-2wmjv"
I0905 20:40:57.897619       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-1387/pvc-2wmjv" duration="3.9µs"
I0905 20:40:57.898385       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1387/pvc-bqhm6"
... skipping 582 lines ...
I0905 20:41:30.461342       1 pv_controller.go:1231] deleteVolumeOperation [pvc-be3c4196-98b0-4993-b2df-907c3d698924] started
I0905 20:41:30.461509       1 pv_controller.go:1108] reclaimVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: policy is Delete
I0905 20:41:30.461534       1 pv_controller.go:1752] scheduleOperation[delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]]
I0905 20:41:30.461543       1 pv_controller.go:1763] operation "delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]" is already running, skipping
I0905 20:41:30.463435       1 pv_controller.go:1340] isVolumeReleased[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is released
I0905 20:41:30.463454       1 pv_controller.go:1404] doDeleteVolume [pvc-be3c4196-98b0-4993-b2df-907c3d698924]
I0905 20:41:30.489488       1 pv_controller.go:1259] deletion of volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:41:30.489522       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: set phase Failed
I0905 20:41:30.489531       1 pv_controller.go:858] updating PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: set phase Failed
I0905 20:41:30.493802       1 pv_protection_controller.go:205] Got event on PV pvc-be3c4196-98b0-4993-b2df-907c3d698924
I0905 20:41:30.493802       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" with version 3051
I0905 20:41:30.494262       1 pv_controller.go:879] volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" entered phase "Failed"
I0905 20:41:30.494293       1 pv_controller.go:901] volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:41:30.494340       1 goroutinemap.go:150] Operation for "delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]" failed. No retries permitted until 2022-09-05 20:41:30.994318853 +0000 UTC m=+1065.388061242 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:41:30.493828       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" with version 3051
I0905 20:41:30.494557       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: phase: Failed, bound to: "azuredisk-1387/pvc-bqhm6 (uid: be3c4196-98b0-4993-b2df-907c3d698924)", boundByController: true
I0905 20:41:30.494642       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is bound to claim azuredisk-1387/pvc-bqhm6
I0905 20:41:30.494706       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: claim azuredisk-1387/pvc-bqhm6 not found
I0905 20:41:30.494723       1 pv_controller.go:1108] reclaimVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: policy is Delete
I0905 20:41:30.494739       1 pv_controller.go:1752] scheduleOperation[delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]]
I0905 20:41:30.494783       1 pv_controller.go:1765] operation "delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]" postponed due to exponential backoff
I0905 20:41:30.494891       1 event.go:291] "Event occurred" object="pvc-be3c4196-98b0-4993-b2df-907c3d698924" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
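When doDeleteVolume fails, the controller also records a Warning event with reason VolumeFailedDelete on the PersistentVolume, as in the entry above. A minimal sketch for listing those events with client-go is shown below, assuming a reachable cluster and a hypothetical kubeconfig path; it is a debugging aid for reading this kind of failure, not part of the test suite or the log.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Example PV name taken from this log.
	pvName := "pvc-be3c4196-98b0-4993-b2df-907c3d698924"
	events, err := clientset.CoreV1().Events(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.kind=PersistentVolume,involvedObject.name=%s", pvName),
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Expect Warning/VolumeFailedDelete entries while the disk is still attached.
		fmt.Printf("%s %s %s: %s\n", e.LastTimestamp, e.Type, e.Reason, e.Message)
	}
}
```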
... skipping 57 lines ...
I0905 20:41:43.267367       1 pv_controller.go:858] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: set phase Bound
I0905 20:41:43.267377       1 pv_controller.go:861] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase Bound already set
I0905 20:41:43.267377       1 pv_controller.go:861] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase Bound already set
I0905 20:41:43.267386       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-1387/pvc-zqz9r]: binding to "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76"
I0905 20:41:43.267393       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" with version 3051
I0905 20:41:43.267405       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-1387/pvc-zqz9r]: already bound to "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76"
I0905 20:41:43.267414       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: phase: Failed, bound to: "azuredisk-1387/pvc-bqhm6 (uid: be3c4196-98b0-4993-b2df-907c3d698924)", boundByController: true
I0905 20:41:43.267416       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1387/pvc-zqz9r] status: set phase Bound
I0905 20:41:43.267436       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is bound to claim azuredisk-1387/pvc-bqhm6
I0905 20:41:43.267440       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-zqz9r] status: phase Bound already set
I0905 20:41:43.267453       1 pv_controller.go:1038] volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" bound to claim "azuredisk-1387/pvc-zqz9r"
I0905 20:41:43.267457       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: claim azuredisk-1387/pvc-bqhm6 not found
I0905 20:41:43.267465       1 pv_controller.go:1108] reclaimVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: policy is Delete
... skipping 16 lines ...
I0905 20:41:43.267641       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-2wmjv] status: phase Bound already set
I0905 20:41:43.267651       1 pv_controller.go:1038] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" bound to claim "azuredisk-1387/pvc-2wmjv"
I0905 20:41:43.267666       1 pv_controller.go:1039] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
I0905 20:41:43.267683       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-2wmjv" status after binding: phase: Bound, bound to: "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42", bindCompleted: true, boundByController: true
I0905 20:41:43.275280       1 pv_controller.go:1340] isVolumeReleased[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is released
I0905 20:41:43.275307       1 pv_controller.go:1404] doDeleteVolume [pvc-be3c4196-98b0-4993-b2df-907c3d698924]
I0905 20:41:43.275343       1 pv_controller.go:1259] deletion of volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) since it's in attaching or detaching state
I0905 20:41:43.275363       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: set phase Failed
I0905 20:41:43.275374       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: phase Failed already set
E0905 20:41:43.275409       1 goroutinemap.go:150] Operation for "delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]" failed. No retries permitted until 2022-09-05 20:41:44.275389258 +0000 UTC m=+1078.669131647 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) since it's in attaching or detaching state
I0905 20:41:44.000524       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-fw7bhx Secret: 2022-09-05T20:41:44Z
I0905 20:41:44.000559       1 tokencleaner.go:194] Deleting expired secret kube-system/bootstrap-token-fw7bhx
I0905 20:41:44.005732       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-fw7bhx, uid e71bf5ba-6bfb-4cb5-84df-b3e1c66aafe1, event type delete
I0905 20:41:44.009575       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-fw7bhx" (9.072514ms)
I0905 20:41:44.837576       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:41:47.493030       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924) returned with <nil>
... skipping 46 lines ...
I0905 20:41:58.268590       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: volume is bound to claim azuredisk-1387/pvc-zqz9r
I0905 20:41:58.268610       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: claim azuredisk-1387/pvc-zqz9r found: phase: Bound, bound to: "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76", bindCompleted: true, boundByController: true
I0905 20:41:58.268626       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: all is bound
I0905 20:41:58.268635       1 pv_controller.go:858] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: set phase Bound
I0905 20:41:58.268646       1 pv_controller.go:861] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase Bound already set
I0905 20:41:58.268666       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" with version 3051
I0905 20:41:58.268690       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: phase: Failed, bound to: "azuredisk-1387/pvc-bqhm6 (uid: be3c4196-98b0-4993-b2df-907c3d698924)", boundByController: true
I0905 20:41:58.268713       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is bound to claim azuredisk-1387/pvc-bqhm6
I0905 20:41:58.268738       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: claim azuredisk-1387/pvc-bqhm6 not found
I0905 20:41:58.268746       1 pv_controller.go:1108] reclaimVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: policy is Delete
I0905 20:41:58.268762       1 pv_controller.go:1752] scheduleOperation[delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]]
I0905 20:41:58.268794       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 2971
I0905 20:41:58.268813       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase: Bound, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
... skipping 19 lines ...
I0905 20:42:03.714887       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-be3c4196-98b0-4993-b2df-907c3d698924
I0905 20:42:03.714920       1 pv_controller.go:1435] volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" deleted
I0905 20:42:03.714933       1 pv_controller.go:1283] deleteVolumeOperation [pvc-be3c4196-98b0-4993-b2df-907c3d698924]: success
I0905 20:42:03.719750       1 pv_protection_controller.go:205] Got event on PV pvc-be3c4196-98b0-4993-b2df-907c3d698924
I0905 20:42:03.719777       1 pv_protection_controller.go:125] Processing PV pvc-be3c4196-98b0-4993-b2df-907c3d698924
I0905 20:42:03.720067       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" with version 3107
I0905 20:42:03.720101       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: phase: Failed, bound to: "azuredisk-1387/pvc-bqhm6 (uid: be3c4196-98b0-4993-b2df-907c3d698924)", boundByController: true
I0905 20:42:03.720128       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: volume is bound to claim azuredisk-1387/pvc-bqhm6
I0905 20:42:03.720156       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: claim azuredisk-1387/pvc-bqhm6 not found
I0905 20:42:03.720163       1 pv_controller.go:1108] reclaimVolume[pvc-be3c4196-98b0-4993-b2df-907c3d698924]: policy is Delete
I0905 20:42:03.720176       1 pv_controller.go:1752] scheduleOperation[delete-pvc-be3c4196-98b0-4993-b2df-907c3d698924[ab8aad61-89aa-4e37-9306-d18b4f3d6a10]]
I0905 20:42:03.720196       1 pv_controller.go:1231] deleteVolumeOperation [pvc-be3c4196-98b0-4993-b2df-907c3d698924] started
I0905 20:42:03.724380       1 pv_controller_base.go:235] volume "pvc-be3c4196-98b0-4993-b2df-907c3d698924" deleted
... skipping 45 lines ...
I0905 20:42:06.933167       1 pv_controller.go:1108] reclaimVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: policy is Delete
I0905 20:42:06.933180       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]]
I0905 20:42:06.933186       1 pv_controller.go:1763] operation "delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]" is already running, skipping
I0905 20:42:06.933213       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42] started
I0905 20:42:06.934920       1 pv_controller.go:1340] isVolumeReleased[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is released
I0905 20:42:06.935089       1 pv_controller.go:1404] doDeleteVolume [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]
I0905 20:42:06.935233       1 pv_controller.go:1259] deletion of volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state
I0905 20:42:06.935254       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: set phase Failed
I0905 20:42:06.935263       1 pv_controller.go:858] updating PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: set phase Failed
I0905 20:42:06.938244       1 pv_protection_controller.go:205] Got event on PV pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42
I0905 20:42:06.938636       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 3115
I0905 20:42:06.938677       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase: Failed, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
I0905 20:42:06.938701       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is bound to claim azuredisk-1387/pvc-2wmjv
I0905 20:42:06.938851       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: claim azuredisk-1387/pvc-2wmjv not found
I0905 20:42:06.938868       1 pv_controller.go:1108] reclaimVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: policy is Delete
I0905 20:42:06.938883       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]]
I0905 20:42:06.938891       1 pv_controller.go:1763] operation "delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]" is already running, skipping
I0905 20:42:06.939038       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 3115
I0905 20:42:06.939069       1 pv_controller.go:879] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" entered phase "Failed"
I0905 20:42:06.939079       1 pv_controller.go:901] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state
E0905 20:42:06.939121       1 goroutinemap.go:150] Operation for "delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]" failed. No retries permitted until 2022-09-05 20:42:07.439100455 +0000 UTC m=+1101.832842844 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state
I0905 20:42:06.939366       1 event.go:291] "Event occurred" object="pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state"
I0905 20:42:07.175037       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0905 20:42:13.081658       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="98.105µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:43674" resp=200
I0905 20:42:13.177097       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:42:13.268517       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:42:13.268604       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" with version 2962
I0905 20:42:13.268649       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase: Bound, bound to: "azuredisk-1387/pvc-zqz9r (uid: ac473bc7-9c73-4d6c-80b2-75daae6f9d76)", boundByController: true
I0905 20:42:13.268688       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: volume is bound to claim azuredisk-1387/pvc-zqz9r
I0905 20:42:13.268712       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: claim azuredisk-1387/pvc-zqz9r found: phase: Bound, bound to: "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76", bindCompleted: true, boundByController: true
I0905 20:42:13.268733       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: all is bound
I0905 20:42:13.268748       1 pv_controller.go:858] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: set phase Bound
I0905 20:42:13.268760       1 pv_controller.go:861] updating PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase Bound already set
I0905 20:42:13.268778       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 3115
I0905 20:42:13.268816       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase: Failed, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
I0905 20:42:13.268854       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is bound to claim azuredisk-1387/pvc-2wmjv
I0905 20:42:13.268878       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: claim azuredisk-1387/pvc-2wmjv not found
I0905 20:42:13.268887       1 pv_controller.go:1108] reclaimVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: policy is Delete
I0905 20:42:13.268910       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]]
I0905 20:42:13.268938       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42] started
I0905 20:42:13.269219       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-zqz9r" with version 2964
... skipping 11 lines ...
I0905 20:42:13.269442       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-zqz9r] status: phase Bound already set
I0905 20:42:13.269460       1 pv_controller.go:1038] volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" bound to claim "azuredisk-1387/pvc-zqz9r"
I0905 20:42:13.269479       1 pv_controller.go:1039] volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-zqz9r (uid: ac473bc7-9c73-4d6c-80b2-75daae6f9d76)", boundByController: true
I0905 20:42:13.269501       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-zqz9r" status after binding: phase: Bound, bound to: "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76", bindCompleted: true, boundByController: true
I0905 20:42:13.275023       1 pv_controller.go:1340] isVolumeReleased[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is released
I0905 20:42:13.275046       1 pv_controller.go:1404] doDeleteVolume [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]
I0905 20:42:13.275085       1 pv_controller.go:1259] deletion of volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state
I0905 20:42:13.275100       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: set phase Failed
I0905 20:42:13.275112       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase Failed already set
E0905 20:42:13.275144       1 goroutinemap.go:150] Operation for "delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]" failed. No retries permitted until 2022-09-05 20:42:14.27512102 +0000 UTC m=+1108.668863409 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42) since it's in attaching or detaching state
I0905 20:42:14.895040       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:42:18.215974       1 controller.go:272] Triggering nodeSync
I0905 20:42:18.216008       1 controller.go:291] nodeSync has been triggered
I0905 20:42:18.216017       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0905 20:42:18.216028       1 controller.go:804] Finished updateLoadBalancerHosts
I0905 20:42:18.216035       1 controller.go:731] It took 1.9201e-05 seconds to finish nodeSyncInternal
... skipping 8 lines ...
I0905 20:42:26.207779       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:42:26.331594       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:42:28.158812       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:42:28.178236       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:42:28.268791       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:42:28.268857       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 3115
I0905 20:42:28.268893       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase: Failed, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
I0905 20:42:28.268928       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is bound to claim azuredisk-1387/pvc-2wmjv
I0905 20:42:28.268958       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: claim azuredisk-1387/pvc-2wmjv not found
I0905 20:42:28.268966       1 pv_controller.go:1108] reclaimVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: policy is Delete
I0905 20:42:28.268982       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]]
I0905 20:42:28.268997       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" with version 2962
I0905 20:42:28.269016       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76]: phase: Bound, bound to: "azuredisk-1387/pvc-zqz9r (uid: ac473bc7-9c73-4d6c-80b2-75daae6f9d76)", boundByController: true
... skipping 26 lines ...
I0905 20:42:33.514380       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42
I0905 20:42:33.514415       1 pv_controller.go:1435] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" deleted
I0905 20:42:33.514429       1 pv_controller.go:1283] deleteVolumeOperation [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: success
I0905 20:42:33.527346       1 pv_protection_controller.go:205] Got event on PV pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42
I0905 20:42:33.527384       1 pv_protection_controller.go:125] Processing PV pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42
I0905 20:42:33.527363       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" with version 3155
I0905 20:42:33.527828       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: phase: Failed, bound to: "azuredisk-1387/pvc-2wmjv (uid: bd538b6d-953e-4cd1-bb23-b7843b55df42)", boundByController: true
I0905 20:42:33.528093       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: volume is bound to claim azuredisk-1387/pvc-2wmjv
I0905 20:42:33.528292       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: claim azuredisk-1387/pvc-2wmjv not found
I0905 20:42:33.528448       1 pv_controller.go:1108] reclaimVolume[pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42]: policy is Delete
I0905 20:42:33.528632       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42[8276bc83-7e0c-45e5-bc9b-7cb1c2597899]]
I0905 20:42:33.528822       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42] started
I0905 20:42:33.534539       1 pv_controller_base.go:235] volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" deleted
I0905 20:42:33.534587       1 pv_controller_base.go:505] deletion of claim "azuredisk-1387/pvc-2wmjv" was already processed
I0905 20:42:33.534898       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42
I0905 20:42:33.534915       1 pv_protection_controller.go:128] Finished processing PV pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42 (7.523025ms)
I0905 20:42:33.534976       1 pv_controller.go:1238] error reading persistent volume "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42": persistentvolumes "pvc-bd538b6d-953e-4cd1-bb23-b7843b55df42" not found
I0905 20:42:37.216576       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0905 20:42:38.234137       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1387/pvc-zqz9r"
I0905 20:42:38.234178       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1387/pvc-zqz9r"
I0905 20:42:38.234195       1 pvc_protection_controller.go:241] "Looking for Pods using PVC in the Informer's cache" PVC="azuredisk-1387/pvc-zqz9r"
I0905 20:42:38.234208       1 pvc_protection_controller.go:273] "No Pod using PVC was found in the Informer's cache" PVC="azuredisk-1387/pvc-zqz9r"
I0905 20:42:38.234220       1 pvc_protection_controller.go:278] "Looking for Pods using PVC with a live list" PVC="azuredisk-1387/pvc-zqz9r"
... skipping 64 lines ...
I0905 20:42:43.476124       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76[0d31fa8f-1064-4a35-8866-0732513babbf]]
I0905 20:42:43.476143       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76] started
I0905 20:42:43.482221       1 pv_controller_base.go:235] volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" deleted
I0905 20:42:43.482367       1 pv_controller_base.go:505] deletion of claim "azuredisk-1387/pvc-zqz9r" was already processed
I0905 20:42:43.482691       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76
I0905 20:42:43.482715       1 pv_protection_controller.go:128] Finished processing PV pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76 (6.932592ms)
I0905 20:42:43.483915       1 pv_controller.go:1238] error reading persistent volume "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76": persistentvolumes "pvc-ac473bc7-9c73-4d6c-80b2-75daae6f9d76" not found
I0905 20:42:49.049263       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (3.401µs)
I0905 20:42:49.175872       1 publisher.go:186] Finished syncing namespace "azuredisk-4547" (11.30234ms)
I0905 20:42:49.185065       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (20.785877ms)
I0905 20:42:50.500914       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-4547/pvc-8vwlm"
I0905 20:42:50.500914       1 pv_controller_base.go:612] storeObjectUpdate: adding claim "azuredisk-4547/pvc-8vwlm", version 3195
I0905 20:42:50.501106       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 362 lines ...
I0905 20:43:10.554454       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: claim azuredisk-4547/pvc-sstpt not found
I0905 20:43:10.554490       1 pv_controller.go:1108] reclaimVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: policy is Delete
I0905 20:43:10.554548       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]]
I0905 20:43:10.554641       1 pv_controller.go:1763] operation "delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]" is already running, skipping
I0905 20:43:10.557503       1 pv_controller.go:1340] isVolumeReleased[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is released
I0905 20:43:10.557523       1 pv_controller.go:1404] doDeleteVolume [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]
I0905 20:43:10.583521       1 pv_controller.go:1259] deletion of volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:43:10.583544       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: set phase Failed
I0905 20:43:10.583553       1 pv_controller.go:858] updating PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: set phase Failed
I0905 20:43:10.586508       1 pv_protection_controller.go:205] Got event on PV pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde
I0905 20:43:10.586689       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3288
I0905 20:43:10.586927       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase: Failed, bound to: "azuredisk-4547/pvc-sstpt (uid: 6f8b47e3-70cb-4b07-82c3-eb06cc919cde)", boundByController: true
I0905 20:43:10.587104       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is bound to claim azuredisk-4547/pvc-sstpt
I0905 20:43:10.587251       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: claim azuredisk-4547/pvc-sstpt not found
I0905 20:43:10.587371       1 pv_controller.go:1108] reclaimVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: policy is Delete
I0905 20:43:10.587510       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]]
I0905 20:43:10.587616       1 pv_controller.go:1763] operation "delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]" is already running, skipping
I0905 20:43:10.588210       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3288
I0905 20:43:10.588368       1 pv_controller.go:879] volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" entered phase "Failed"
I0905 20:43:10.588502       1 pv_controller.go:901] volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:43:10.588638       1 goroutinemap.go:150] Operation for "delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]" failed. No retries permitted until 2022-09-05 20:43:11.088616913 +0000 UTC m=+1165.482359202 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:43:10.588985       1 event.go:291] "Event occurred" object="pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:43:11.952959       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
I0905 20:43:11.952992       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:43:11.953003       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec to the node "capz-06vmzc-md-0-dbrp2" mounted false
I0905 20:43:11.974511       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde\"}]}}" for node "capz-06vmzc-md-0-dbrp2" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde 1}]
I0905 20:43:11.975583       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec") on node "capz-06vmzc-md-0-dbrp2" 
... skipping 19 lines ...
I0905 20:43:13.271264       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: volume is bound to claim azuredisk-4547/pvc-8vwlm
I0905 20:43:13.271289       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: claim azuredisk-4547/pvc-8vwlm found: phase: Bound, bound to: "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec", bindCompleted: true, boundByController: true
I0905 20:43:13.271312       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: all is bound
I0905 20:43:13.271326       1 pv_controller.go:858] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: set phase Bound
I0905 20:43:13.271338       1 pv_controller.go:861] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: phase Bound already set
I0905 20:43:13.271357       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3288
I0905 20:43:13.271388       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase: Failed, bound to: "azuredisk-4547/pvc-sstpt (uid: 6f8b47e3-70cb-4b07-82c3-eb06cc919cde)", boundByController: true
I0905 20:43:13.271410       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is bound to claim azuredisk-4547/pvc-sstpt
I0905 20:43:13.271434       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: claim azuredisk-4547/pvc-sstpt not found
I0905 20:43:13.271447       1 pv_controller.go:1108] reclaimVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: policy is Delete
I0905 20:43:13.271465       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]]
I0905 20:43:13.271503       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde] started
I0905 20:43:13.271751       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-8vwlm" with version 3213
... skipping 11 lines ...
I0905 20:43:13.271930       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm] status: phase Bound already set
I0905 20:43:13.271941       1 pv_controller.go:1038] volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" bound to claim "azuredisk-4547/pvc-8vwlm"
I0905 20:43:13.271958       1 pv_controller.go:1039] volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-8vwlm (uid: d4238a6c-0379-42f8-bfd9-44848d4858ec)", boundByController: true
I0905 20:43:13.271973       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-8vwlm" status after binding: phase: Bound, bound to: "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec", bindCompleted: true, boundByController: true
I0905 20:43:13.277209       1 pv_controller.go:1340] isVolumeReleased[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is released
I0905 20:43:13.277230       1 pv_controller.go:1404] doDeleteVolume [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]
I0905 20:43:13.304410       1 pv_controller.go:1259] deletion of volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:43:13.304437       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: set phase Failed
I0905 20:43:13.304450       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase Failed already set
E0905 20:43:13.304480       1 goroutinemap.go:150] Operation for "delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]" failed. No retries permitted until 2022-09-05 20:43:14.304459248 +0000 UTC m=+1168.698201537 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:43:13.415145       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:43:18.267441       1 gc_controller.go:161] GC'ing orphaned
I0905 20:43:18.267499       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:43:22.959446       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0905 20:43:23.081983       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.904µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40100" resp=200
I0905 20:43:27.524469       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec) returned with <nil>
... skipping 26 lines ...
I0905 20:43:28.273059       1 pv_controller.go:1039] volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-8vwlm (uid: d4238a6c-0379-42f8-bfd9-44848d4858ec)", boundByController: true
I0905 20:43:28.273074       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: all is bound
I0905 20:43:28.273076       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-8vwlm" status after binding: phase: Bound, bound to: "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec", bindCompleted: true, boundByController: true
I0905 20:43:28.273081       1 pv_controller.go:858] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: set phase Bound
I0905 20:43:28.273090       1 pv_controller.go:861] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: phase Bound already set
I0905 20:43:28.273154       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3288
I0905 20:43:28.273185       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase: Failed, bound to: "azuredisk-4547/pvc-sstpt (uid: 6f8b47e3-70cb-4b07-82c3-eb06cc919cde)", boundByController: true
I0905 20:43:28.273209       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is bound to claim azuredisk-4547/pvc-sstpt
I0905 20:43:28.273233       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: claim azuredisk-4547/pvc-sstpt not found
I0905 20:43:28.273242       1 pv_controller.go:1108] reclaimVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: policy is Delete
I0905 20:43:28.273260       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]]
I0905 20:43:28.273302       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde] started
I0905 20:43:28.285478       1 pv_controller.go:1340] isVolumeReleased[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is released
I0905 20:43:28.285499       1 pv_controller.go:1404] doDeleteVolume [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]
I0905 20:43:28.285535       1 pv_controller.go:1259] deletion of volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) since it's in attaching or detaching state
I0905 20:43:28.285553       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: set phase Failed
I0905 20:43:28.285563       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase Failed already set
E0905 20:43:28.285602       1 goroutinemap.go:150] Operation for "delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]" failed. No retries permitted until 2022-09-05 20:43:30.285578947 +0000 UTC m=+1184.679321336 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) since it's in attaching or detaching state
I0905 20:43:29.319031       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0905 20:43:32.760902       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 32 items received
I0905 20:43:33.081240       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="117.507µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41538" resp=200
I0905 20:43:37.999152       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) returned with <nil>
I0905 20:43:37.999195       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde) succeeded
I0905 20:43:37.999206       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde was detached from node:capz-06vmzc-md-0-dbrp2
... skipping 15 lines ...
I0905 20:43:43.272978       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm]: volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" found: phase: Bound, bound to: "azuredisk-4547/pvc-8vwlm (uid: d4238a6c-0379-42f8-bfd9-44848d4858ec)", boundByController: true
I0905 20:43:43.272988       1 pv_controller.go:861] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: phase Bound already set
I0905 20:43:43.272990       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm]: claim is already correctly bound
I0905 20:43:43.273001       1 pv_controller.go:1012] binding volume "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec" to claim "azuredisk-4547/pvc-8vwlm"
I0905 20:43:43.273002       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3288
I0905 20:43:43.273012       1 pv_controller.go:910] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: binding to "azuredisk-4547/pvc-8vwlm"
I0905 20:43:43.273026       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase: Failed, bound to: "azuredisk-4547/pvc-sstpt (uid: 6f8b47e3-70cb-4b07-82c3-eb06cc919cde)", boundByController: true
I0905 20:43:43.273045       1 pv_controller.go:922] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: already bound to "azuredisk-4547/pvc-8vwlm"
I0905 20:43:43.273082       1 pv_controller.go:858] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: set phase Bound
I0905 20:43:43.273090       1 pv_controller.go:861] updating PersistentVolume[pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec]: phase Bound already set
I0905 20:43:43.273096       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm]: binding to "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec"
I0905 20:43:43.273109       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm]: already bound to "pvc-d4238a6c-0379-42f8-bfd9-44848d4858ec"
I0905 20:43:43.273116       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-4547/pvc-8vwlm] status: set phase Bound
... skipping 12 lines ...
I0905 20:43:48.619982       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde
I0905 20:43:48.620016       1 pv_controller.go:1435] volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" deleted
I0905 20:43:48.620028       1 pv_controller.go:1283] deleteVolumeOperation [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: success
I0905 20:43:48.625382       1 pv_protection_controller.go:205] Got event on PV pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde
I0905 20:43:48.625684       1 pv_protection_controller.go:125] Processing PV pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde
I0905 20:43:48.626109       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" with version 3346
I0905 20:43:48.626146       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: phase: Failed, bound to: "azuredisk-4547/pvc-sstpt (uid: 6f8b47e3-70cb-4b07-82c3-eb06cc919cde)", boundByController: true
I0905 20:43:48.626174       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: volume is bound to claim azuredisk-4547/pvc-sstpt
I0905 20:43:48.626196       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: claim azuredisk-4547/pvc-sstpt not found
I0905 20:43:48.626208       1 pv_controller.go:1108] reclaimVolume[pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde]: policy is Delete
I0905 20:43:48.626225       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde[464c5be4-d158-4016-a792-a50a3da11d2f]]
I0905 20:43:48.626255       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde] started
I0905 20:43:48.629530       1 pv_controller.go:1243] Volume "pvc-6f8b47e3-70cb-4b07-82c3-eb06cc919cde" is already being deleted
... skipping 174 lines ...
I0905 20:44:08.003758       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-7578/pvc-z6sq9[a43868f5-41d8-41a2-bf6c-10a2e71a43a2]]
I0905 20:44:08.003765       1 pv_controller.go:1763] operation "provision-azuredisk-7578/pvc-z6sq9[a43868f5-41d8-41a2-bf6c-10a2e71a43a2]" is already running, skipping
I0905 20:44:08.003993       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-06vmzc-dynamic-pvc-651797ee-811a-4e54-9db6-dff5b999826c StorageAccountType:Standard_LRS Size:10
I0905 20:44:08.004396       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 StorageAccountType:Premium_LRS Size:10
I0905 20:44:08.010572       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2 StorageAccountType:StandardSSD_LRS Size:10
I0905 20:44:08.095416       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-5dbb2, uid ca290b00-6512-4548-9452-e132c7a7792f, event type delete
E0905 20:44:08.119314       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-kdc57" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0905 20:44:08.123684       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4547, name kube-root-ca.crt, uid a7a95f98-0fba-47e2-94a6-b4c0b236a550, event type delete
I0905 20:44:08.133241       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-f2zqh.1712119bed366f14, uid e9807952-5337-4eb7-9164-90bb483f0d46, event type delete
I0905 20:44:08.133485       1 publisher.go:186] Finished syncing namespace "azuredisk-4547" (9.750952ms)
I0905 20:44:08.142969       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-f2zqh.1712119d3b2959d3, uid b6986aeb-0e74-4d59-9bf8-75ceb09b51d6, event type delete
I0905 20:44:08.147036       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-f2zqh.1712119e7eb70ebb, uid b8066208-0bbb-4657-ab05-e9e18e30cf9d, event type delete
I0905 20:44:08.153457       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-f2zqh.1712119eacbb0769, uid 8d7f5d45-b40d-4e2c-b187-dc21bae1a353, event type delete
... skipping 10 lines ...
I0905 20:44:08.237063       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (2µs)
I0905 20:44:08.299746       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4547, estimate: 0, errors: <nil>
I0905 20:44:08.300754       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4547" (2.9µs)
I0905 20:44:08.310795       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4547" (401.246532ms)
I0905 20:44:09.419376       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0905 20:44:09.470471       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-qzs57, uid 002689bb-b54b-476b-9708-f0c5f7de533e, event type delete
E0905 20:44:09.485552       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-gn6sz" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0905 20:44:09.510577       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0905 20:44:09.510637       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 0946c5cb-2657-479c-b7fb-4cc926a70c69, event type delete
I0905 20:44:09.510668       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (1.6µs)
I0905 20:44:09.516687       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid 59a5666b-3b10-47fd-9444-50f41c69bbfc, event type delete
I0905 20:44:09.520739       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (4.000827ms)
I0905 20:44:09.587461       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7051, estimate: 0, errors: <nil>
... skipping 171 lines ...
I0905 20:44:10.599442       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-z6sq9] status: phase Bound already set
I0905 20:44:10.599455       1 pv_controller.go:1038] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" bound to claim "azuredisk-7578/pvc-z6sq9"
I0905 20:44:10.599473       1 pv_controller.go:1039] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-z6sq9 (uid: a43868f5-41d8-41a2-bf6c-10a2e71a43a2)", boundByController: true
I0905 20:44:10.599490       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-z6sq9" status after binding: phase: Bound, bound to: "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2", bindCompleted: true, boundByController: true
I0905 20:44:10.918699       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9183
I0905 20:44:10.941789       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9183, name default-token-5gvj4, uid beab935e-57d0-4362-b68c-bda3bcabdfe1, event type delete
E0905 20:44:10.958922       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9183/default: secrets "default-token-j2z6r" is forbidden: unable to create new content in namespace azuredisk-9183 because it is being terminated
I0905 20:44:10.972416       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-7dv5s"
I0905 20:44:10.972452       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-7dv5s, PodDisruptionBudget controller will avoid syncing.
I0905 20:44:10.972665       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-7578/azuredisk-volume-tester-7dv5s"
I0905 20:44:10.972460       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-7dv5s"
I0905 20:44:10.991637       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-7dv5s"
I0905 20:44:10.992011       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-7dv5s, PodDisruptionBudget controller will avoid syncing.
... skipping 295 lines ...
I0905 20:44:38.239665       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: claim azuredisk-7578/pvc-z6sq9 not found
I0905 20:44:38.239672       1 pv_controller.go:1108] reclaimVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: policy is Delete
I0905 20:44:38.239684       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]]
I0905 20:44:38.239692       1 pv_controller.go:1763] operation "delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]" is already running, skipping
I0905 20:44:38.241841       1 pv_controller.go:1340] isVolumeReleased[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is released
I0905 20:44:38.241859       1 pv_controller.go:1404] doDeleteVolume [pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]
I0905 20:44:38.267646       1 pv_controller.go:1259] deletion of volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:44:38.267670       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: set phase Failed
I0905 20:44:38.267679       1 pv_controller.go:858] updating PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: set phase Failed
I0905 20:44:38.269933       1 gc_controller.go:161] GC'ing orphaned
I0905 20:44:38.269964       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:44:38.271400       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" with version 3530
I0905 20:44:38.271428       1 pv_controller.go:879] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" entered phase "Failed"
I0905 20:44:38.271438       1 pv_controller.go:901] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:44:38.271477       1 goroutinemap.go:150] Operation for "delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]" failed. No retries permitted until 2022-09-05 20:44:38.771458054 +0000 UTC m=+1253.165200343 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:44:38.271719       1 event.go:291] "Event occurred" object="pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:44:38.271863       1 pv_protection_controller.go:205] Got event on PV pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2
I0905 20:44:38.271886       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" with version 3530
I0905 20:44:38.271904       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: phase: Failed, bound to: "azuredisk-7578/pvc-z6sq9 (uid: a43868f5-41d8-41a2-bf6c-10a2e71a43a2)", boundByController: true
I0905 20:44:38.271920       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is bound to claim azuredisk-7578/pvc-z6sq9
I0905 20:44:38.271934       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: claim azuredisk-7578/pvc-z6sq9 not found
I0905 20:44:38.271941       1 pv_controller.go:1108] reclaimVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: policy is Delete
I0905 20:44:38.272095       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]]
I0905 20:44:38.272105       1 pv_controller.go:1765] operation "delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]" postponed due to exponential backoff
I0905 20:44:42.053792       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-md-0-dbrp2"
... skipping 56 lines ...
I0905 20:44:43.276239       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-7578/pvc-hgfdj] status: set phase Bound
I0905 20:44:43.276208       1 pv_controller.go:858] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: set phase Bound
I0905 20:44:43.276257       1 pv_controller.go:861] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: phase Bound already set
I0905 20:44:43.276263       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-hgfdj] status: phase Bound already set
I0905 20:44:43.276271       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" with version 3530
I0905 20:44:43.276276       1 pv_controller.go:1038] volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" bound to claim "azuredisk-7578/pvc-hgfdj"
I0905 20:44:43.276291       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: phase: Failed, bound to: "azuredisk-7578/pvc-z6sq9 (uid: a43868f5-41d8-41a2-bf6c-10a2e71a43a2)", boundByController: true
I0905 20:44:43.276295       1 pv_controller.go:1039] volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:44:43.276311       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is bound to claim azuredisk-7578/pvc-z6sq9
I0905 20:44:43.276312       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-hgfdj" status after binding: phase: Bound, bound to: "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4", bindCompleted: true, boundByController: true
I0905 20:44:43.276328       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-8njq6" with version 3455
I0905 20:44:43.276331       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: claim azuredisk-7578/pvc-z6sq9 not found
I0905 20:44:43.276339       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-8njq6]: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
... skipping 13 lines ...
I0905 20:44:43.276815       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-8njq6] status: phase Bound already set
I0905 20:44:43.276826       1 pv_controller.go:1038] volume "pvc-651797ee-811a-4e54-9db6-dff5b999826c" bound to claim "azuredisk-7578/pvc-8njq6"
I0905 20:44:43.276846       1 pv_controller.go:1039] volume "pvc-651797ee-811a-4e54-9db6-dff5b999826c" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-8njq6 (uid: 651797ee-811a-4e54-9db6-dff5b999826c)", boundByController: true
I0905 20:44:43.276865       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-8njq6" status after binding: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:44:43.279622       1 pv_controller.go:1340] isVolumeReleased[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is released
I0905 20:44:43.279653       1 pv_controller.go:1404] doDeleteVolume [pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]
I0905 20:44:43.279692       1 pv_controller.go:1259] deletion of volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) since it's in attaching or detaching state
I0905 20:44:43.279710       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: set phase Failed
I0905 20:44:43.279719       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: phase Failed already set
E0905 20:44:43.279752       1 goroutinemap.go:150] Operation for "delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]" failed. No retries permitted until 2022-09-05 20:44:44.279727668 +0000 UTC m=+1258.673469957 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) since it's in attaching or detaching state
I0905 20:44:43.428563       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-md-0-dbrp2 ReadyCondition updated. Updating timestamp.
I0905 20:44:46.622693       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 1 items received
I0905 20:44:53.092473       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.304µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40184" resp=200
I0905 20:44:55.378335       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 6 items received
I0905 20:44:57.730143       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) returned with <nil>
I0905 20:44:57.730201       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2) succeeded
... skipping 50 lines ...
I0905 20:44:58.277122       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: volume is bound to claim azuredisk-7578/pvc-8njq6
I0905 20:44:58.277137       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: claim azuredisk-7578/pvc-8njq6 found: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:44:58.277152       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: all is bound
I0905 20:44:58.277160       1 pv_controller.go:858] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: set phase Bound
I0905 20:44:58.277170       1 pv_controller.go:861] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: phase Bound already set
I0905 20:44:58.277181       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" with version 3530
I0905 20:44:58.277200       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: phase: Failed, bound to: "azuredisk-7578/pvc-z6sq9 (uid: a43868f5-41d8-41a2-bf6c-10a2e71a43a2)", boundByController: true
I0905 20:44:58.277220       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is bound to claim azuredisk-7578/pvc-z6sq9
I0905 20:44:58.277244       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: claim azuredisk-7578/pvc-z6sq9 not found
I0905 20:44:58.277252       1 pv_controller.go:1108] reclaimVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: policy is Delete
I0905 20:44:58.277268       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]]
I0905 20:44:58.277316       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2] started
I0905 20:44:58.286989       1 pv_controller.go:1340] isVolumeReleased[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is released
... skipping 3 lines ...
I0905 20:45:03.593237       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2
I0905 20:45:03.593275       1 pv_controller.go:1435] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" deleted
I0905 20:45:03.593295       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: success
I0905 20:45:03.600646       1 pv_protection_controller.go:205] Got event on PV pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2
I0905 20:45:03.600902       1 pv_protection_controller.go:125] Processing PV pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2
I0905 20:45:03.601438       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" with version 3573
I0905 20:45:03.601662       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: phase: Failed, bound to: "azuredisk-7578/pvc-z6sq9 (uid: a43868f5-41d8-41a2-bf6c-10a2e71a43a2)", boundByController: true
I0905 20:45:03.601877       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: volume is bound to claim azuredisk-7578/pvc-z6sq9
I0905 20:45:03.602069       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: claim azuredisk-7578/pvc-z6sq9 not found
I0905 20:45:03.602239       1 pv_controller.go:1108] reclaimVolume[pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2]: policy is Delete
I0905 20:45:03.602413       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]]
I0905 20:45:03.602578       1 pv_controller.go:1763] operation "delete-pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2[dc164a89-3bd4-4bc1-b62d-50bbdc5383da]" is already running, skipping
I0905 20:45:03.606953       1 pv_controller_base.go:235] volume "pvc-a43868f5-41d8-41a2-bf6c-10a2e71a43a2" deleted
... skipping 44 lines ...
I0905 20:45:04.465262       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:04.465381       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:04.465405       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
I0905 20:45:04.465413       1 pv_controller.go:1763] operation "delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]" is already running, skipping
I0905 20:45:04.467268       1 pv_controller.go:1340] isVolumeReleased[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is released
I0905 20:45:04.467287       1 pv_controller.go:1404] doDeleteVolume [pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]
I0905 20:45:04.490909       1 pv_controller.go:1259] deletion of volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:45:04.490931       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: set phase Failed
I0905 20:45:04.490941       1 pv_controller.go:858] updating PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: set phase Failed
I0905 20:45:04.496518       1 pv_protection_controller.go:205] Got event on PV pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4
I0905 20:45:04.496661       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3579
I0905 20:45:04.496766       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase: Failed, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:45:04.496854       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is bound to claim azuredisk-7578/pvc-hgfdj
I0905 20:45:04.496946       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:04.497067       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:04.497157       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
I0905 20:45:04.497205       1 pv_controller.go:1763] operation "delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]" is already running, skipping
I0905 20:45:04.497409       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3579
I0905 20:45:04.497502       1 pv_controller.go:879] volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" entered phase "Failed"
I0905 20:45:04.497538       1 pv_controller.go:901] volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
E0905 20:45:04.497699       1 goroutinemap.go:150] Operation for "delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]" failed. No retries permitted until 2022-09-05 20:45:04.997676295 +0000 UTC m=+1279.391418584 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted
I0905 20:45:04.497920       1 event.go:291] "Event occurred" object="pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/virtualMachines/capz-06vmzc-md-0-dbrp2), could not be deleted"
I0905 20:45:08.475178       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0905 20:45:13.088816       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="92.506µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48174" resp=200
I0905 20:45:13.166510       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-651797ee-811a-4e54-9db6-dff5b999826c) returned with <nil>
I0905 20:45:13.166563       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-651797ee-811a-4e54-9db6-dff5b999826c) succeeded
I0905 20:45:13.166578       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-651797ee-811a-4e54-9db6-dff5b999826c was detached from node:capz-06vmzc-md-0-dbrp2
... skipping 2 lines ...
I0905 20:45:13.204427       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4"
I0905 20:45:13.204461       1 azure_controller_standard.go:166] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4)
I0905 20:45:13.276648       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:45:13.276735       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3579
I0905 20:45:13.276756       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-8njq6" with version 3455
I0905 20:45:13.276778       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-8njq6]: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:45:13.276780       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase: Failed, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:45:13.276811       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is bound to claim azuredisk-7578/pvc-hgfdj
I0905 20:45:13.276836       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:13.276842       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-8njq6]: volume "pvc-651797ee-811a-4e54-9db6-dff5b999826c" found: phase: Bound, bound to: "azuredisk-7578/pvc-8njq6 (uid: 651797ee-811a-4e54-9db6-dff5b999826c)", boundByController: true
I0905 20:45:13.276846       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:13.276855       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-8njq6]: claim is already correctly bound
I0905 20:45:13.276866       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
... skipping 16 lines ...
I0905 20:45:13.277190       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: claim azuredisk-7578/pvc-8njq6 found: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:45:13.277207       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: all is bound
I0905 20:45:13.277214       1 pv_controller.go:858] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: set phase Bound
I0905 20:45:13.277224       1 pv_controller.go:861] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: phase Bound already set
I0905 20:45:13.283033       1 pv_controller.go:1340] isVolumeReleased[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is released
I0905 20:45:13.283054       1 pv_controller.go:1404] doDeleteVolume [pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]
I0905 20:45:13.283093       1 pv_controller.go:1259] deletion of volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) since it's in attaching or detaching state
I0905 20:45:13.283108       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: set phase Failed
I0905 20:45:13.283119       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase Failed already set
E0905 20:45:13.283162       1 goroutinemap.go:150] Operation for "delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]" failed. No retries permitted until 2022-09-05 20:45:14.283128441 +0000 UTC m=+1288.676870830 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) since it's in attaching or detaching state
I0905 20:45:15.147430       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolume total 68 items received
I0905 20:45:18.208882       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0905 20:45:18.271111       1 gc_controller.go:161] GC'ing orphaned
I0905 20:45:18.271178       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0905 20:45:18.546105       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-06vmzc-control-plane-tgzc4"
I0905 20:45:23.081217       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="101.105µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56526" resp=200
I0905 20:45:23.435543       1 node_lifecycle_controller.go:1047] Node capz-06vmzc-control-plane-tgzc4 ReadyCondition updated. Updating timestamp.
I0905 20:45:28.162451       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:45:28.185799       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:45:28.277108       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:45:28.277196       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3579
I0905 20:45:28.277240       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase: Failed, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:45:28.277285       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is bound to claim azuredisk-7578/pvc-hgfdj
I0905 20:45:28.277326       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:28.277336       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:28.277353       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
I0905 20:45:28.277371       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-651797ee-811a-4e54-9db6-dff5b999826c" with version 3452
I0905 20:45:28.277196       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-8njq6" with version 3455
... skipping 18 lines ...
I0905 20:45:28.277642       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-8njq6" status after binding: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:45:28.277459       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: all is bound
I0905 20:45:28.277658       1 pv_controller.go:858] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: set phase Bound
I0905 20:45:28.277667       1 pv_controller.go:861] updating PersistentVolume[pvc-651797ee-811a-4e54-9db6-dff5b999826c]: phase Bound already set
I0905 20:45:28.299063       1 pv_controller.go:1340] isVolumeReleased[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is released
I0905 20:45:28.299088       1 pv_controller.go:1404] doDeleteVolume [pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]
I0905 20:45:28.299120       1 pv_controller.go:1259] deletion of volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) since it's in attaching or detaching state
I0905 20:45:28.299136       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: set phase Failed
I0905 20:45:28.299151       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase Failed already set
E0905 20:45:28.299207       1 goroutinemap.go:150] Operation for "delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]" failed. No retries permitted until 2022-09-05 20:45:30.299175734 +0000 UTC m=+1304.692918123 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) since it's in attaching or detaching state
I0905 20:45:28.696398       1 azure_controller_standard.go:184] azureDisk - update(capz-06vmzc): vm(capz-06vmzc-md-0-dbrp2) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) returned with <nil>
I0905 20:45:28.696466       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4) succeeded
I0905 20:45:28.696479       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4 was detached from node:capz-06vmzc-md-0-dbrp2
I0905 20:45:28.696511       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4") on node "capz-06vmzc-md-0-dbrp2" 
I0905 20:45:29.402463       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0905 20:45:31.159481       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 0 items received
... skipping 9 lines ...
I0905 20:45:39.158418       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 3 items received
I0905 20:45:39.772692       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 1 items received
I0905 20:45:43.081512       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="106.406µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55004" resp=200
I0905 20:45:43.186479       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:45:43.277413       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:45:43.277490       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3579
I0905 20:45:43.277567       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase: Failed, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:45:43.277529       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-8njq6" with version 3455
I0905 20:45:43.277594       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-8njq6]: phase: Bound, bound to: "pvc-651797ee-811a-4e54-9db6-dff5b999826c", bindCompleted: true, boundByController: true
I0905 20:45:43.277600       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is bound to claim azuredisk-7578/pvc-hgfdj
I0905 20:45:43.277619       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:43.277631       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:43.278167       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
... skipping 24 lines ...
I0905 20:45:48.556672       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4
I0905 20:45:48.556708       1 pv_controller.go:1435] volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" deleted
I0905 20:45:48.556719       1 pv_controller.go:1283] deleteVolumeOperation [pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: success
I0905 20:45:48.580092       1 pv_protection_controller.go:205] Got event on PV pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4
I0905 20:45:48.580394       1 pv_protection_controller.go:125] Processing PV pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4
I0905 20:45:48.580879       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" with version 3643
I0905 20:45:48.580942       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: phase: Failed, bound to: "azuredisk-7578/pvc-hgfdj (uid: 115305d7-ae4a-433a-b58d-ee6be4a0abd4)", boundByController: true
I0905 20:45:48.580980       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: volume is bound to claim azuredisk-7578/pvc-hgfdj
I0905 20:45:48.581008       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: claim azuredisk-7578/pvc-hgfdj not found
I0905 20:45:48.581021       1 pv_controller.go:1108] reclaimVolume[pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4]: policy is Delete
I0905 20:45:48.581037       1 pv_controller.go:1752] scheduleOperation[delete-pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4[0d0f4988-4027-450f-ba9f-6b1761b73ebe]]
I0905 20:45:48.581074       1 pv_controller.go:1231] deleteVolumeOperation [pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4] started
I0905 20:45:48.585306       1 pv_controller.go:1243] Volume "pvc-115305d7-ae4a-433a-b58d-ee6be4a0abd4" is already being deleted
... skipping 113 lines ...
I0905 20:46:07.903574       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (2.9µs)
I0905 20:46:08.027452       1 publisher.go:186] Finished syncing namespace "azuredisk-8666" (10.760309ms)
I0905 20:46:08.075042       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8666" (58.63292ms)
I0905 20:46:08.431678       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1968
I0905 20:46:08.466124       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1968, name default-token-bpkp5, uid 500e1786-3a25-4e1a-8975-cef006fd639b, event type delete
I0905 20:46:08.475444       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1968, name kube-root-ca.crt, uid 7c37db6b-bc4d-47d5-8987-92f88acf64a8, event type delete
E0905 20:46:08.479594       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1968/default: secrets "default-token-nfwvm" is forbidden: unable to create new content in namespace azuredisk-1968 because it is being terminated
I0905 20:46:08.481330       1 publisher.go:186] Finished syncing namespace "azuredisk-1968" (2.774857ms)
I0905 20:46:08.485678       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1968/default), service account deleted, removing tokens
I0905 20:46:08.485745       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1968, name default, uid 26e094ff-f244-49a2-b0d5-645fd8a9dfc9, event type delete
I0905 20:46:08.485781       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1968" (1.6µs)
I0905 20:46:08.578677       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1968" (3.7µs)
I0905 20:46:08.579725       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1968, estimate: 0, errors: <nil>
... skipping 388 lines ...
I0905 20:47:14.865976       1 pv_controller.go:1231] deleteVolumeOperation [pvc-69a06362-035e-4dd8-a957-f505ed687eac] started
I0905 20:47:14.867136       1 pv_controller.go:1108] reclaimVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: policy is Delete
I0905 20:47:14.867473       1 pv_controller.go:1752] scheduleOperation[delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]]
I0905 20:47:14.867602       1 pv_controller.go:1763] operation "delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]" is already running, skipping
I0905 20:47:14.874737       1 pv_controller.go:1340] isVolumeReleased[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: volume is released
I0905 20:47:14.874756       1 pv_controller.go:1404] doDeleteVolume [pvc-69a06362-035e-4dd8-a957-f505ed687eac]
I0905 20:47:14.874868       1 pv_controller.go:1259] deletion of volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-69a06362-035e-4dd8-a957-f505ed687eac) since it's in attaching or detaching state
I0905 20:47:14.874892       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: set phase Failed
I0905 20:47:14.874921       1 pv_controller.go:858] updating PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: set phase Failed
I0905 20:47:14.878083       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" with version 3883
I0905 20:47:14.878306       1 pv_controller.go:879] volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" entered phase "Failed"
I0905 20:47:14.878486       1 pv_controller.go:901] volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-69a06362-035e-4dd8-a957-f505ed687eac) since it's in attaching or detaching state
E0905 20:47:14.878695       1 goroutinemap.go:150] Operation for "delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]" failed. No retries permitted until 2022-09-05 20:47:15.378668723 +0000 UTC m=+1409.772411012 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-69a06362-035e-4dd8-a957-f505ed687eac) since it's in attaching or detaching state
I0905 20:47:14.879009       1 event.go:291] "Event occurred" object="pvc-69a06362-035e-4dd8-a957-f505ed687eac" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-69a06362-035e-4dd8-a957-f505ed687eac) since it's in attaching or detaching state"
I0905 20:47:14.879288       1 pv_protection_controller.go:205] Got event on PV pvc-69a06362-035e-4dd8-a957-f505ed687eac
I0905 20:47:14.879459       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" with version 3883
I0905 20:47:14.879620       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: phase: Failed, bound to: "azuredisk-8666/pvc-jvc92 (uid: 69a06362-035e-4dd8-a957-f505ed687eac)", boundByController: true
I0905 20:47:14.879742       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: volume is bound to claim azuredisk-8666/pvc-jvc92
I0905 20:47:14.879876       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: claim azuredisk-8666/pvc-jvc92 not found
I0905 20:47:14.879982       1 pv_controller.go:1108] reclaimVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: policy is Delete
I0905 20:47:14.880125       1 pv_controller.go:1752] scheduleOperation[delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]]
I0905 20:47:14.880234       1 pv_controller.go:1765] operation "delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]" postponed due to exponential backoff
I0905 20:47:15.134412       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 4 items received
... skipping 16 lines ...
I0905 20:47:24.168980       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 0 items received
I0905 20:47:28.151790       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 11 items received
I0905 20:47:28.165253       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:47:28.190397       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:47:28.281743       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:47:28.281812       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" with version 3883
I0905 20:47:28.282022       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: phase: Failed, bound to: "azuredisk-8666/pvc-jvc92 (uid: 69a06362-035e-4dd8-a957-f505ed687eac)", boundByController: true
I0905 20:47:28.282068       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: volume is bound to claim azuredisk-8666/pvc-jvc92
I0905 20:47:28.282096       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: claim azuredisk-8666/pvc-jvc92 not found
I0905 20:47:28.282109       1 pv_controller.go:1108] reclaimVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: policy is Delete
I0905 20:47:28.282127       1 pv_controller.go:1752] scheduleOperation[delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]]
I0905 20:47:28.282168       1 pv_controller.go:1231] deleteVolumeOperation [pvc-69a06362-035e-4dd8-a957-f505ed687eac] started
I0905 20:47:28.285172       1 pv_controller.go:1340] isVolumeReleased[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: volume is released
... skipping 7 lines ...
I0905 20:47:33.572371       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-69a06362-035e-4dd8-a957-f505ed687eac
I0905 20:47:33.572403       1 pv_controller.go:1435] volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" deleted
I0905 20:47:33.572415       1 pv_controller.go:1283] deleteVolumeOperation [pvc-69a06362-035e-4dd8-a957-f505ed687eac]: success
I0905 20:47:33.581138       1 pv_protection_controller.go:205] Got event on PV pvc-69a06362-035e-4dd8-a957-f505ed687eac
I0905 20:47:33.581283       1 pv_protection_controller.go:125] Processing PV pvc-69a06362-035e-4dd8-a957-f505ed687eac
I0905 20:47:33.581774       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" with version 3909
I0905 20:47:33.581821       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: phase: Failed, bound to: "azuredisk-8666/pvc-jvc92 (uid: 69a06362-035e-4dd8-a957-f505ed687eac)", boundByController: true
I0905 20:47:33.581848       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: volume is bound to claim azuredisk-8666/pvc-jvc92
I0905 20:47:33.581975       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: claim azuredisk-8666/pvc-jvc92 not found
I0905 20:47:33.582099       1 pv_controller.go:1108] reclaimVolume[pvc-69a06362-035e-4dd8-a957-f505ed687eac]: policy is Delete
I0905 20:47:33.582126       1 pv_controller.go:1752] scheduleOperation[delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]]
I0905 20:47:33.582135       1 pv_controller.go:1763] operation "delete-pvc-69a06362-035e-4dd8-a957-f505ed687eac[29990d8c-1fe5-4496-8489-8695374c9b95]" is already running, skipping
I0905 20:47:33.591960       1 pv_controller_base.go:235] volume "pvc-69a06362-035e-4dd8-a957-f505ed687eac" deleted
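
Taken together, the lines above trace the full reclaim path: the bound claim is gone, the reclaim policy is Delete, the Azure managed disk is removed first, and only then is the PersistentVolume object deleted and dropped from the controller's store. A compact sketch of that ordering follows, using hypothetical stand-ins (reclaim, deleteManagedDisk, deletePVObject) rather than the controller's real helpers.

package main

import "fmt"

type pv struct {
	name          string
	claimExists   bool
	reclaimPolicy string // "Delete" or "Retain"
}

// reclaim mirrors the ordering seen in the log: cloud disk first, PV object second,
// so a failed disk deletion leaves the PV around (phase Failed) to be retried.
func reclaim(vol pv, deleteManagedDisk, deletePVObject func(string) error) error {
	if vol.claimExists || vol.reclaimPolicy != "Delete" {
		return nil // nothing to reclaim
	}
	if err := deleteManagedDisk(vol.name); err != nil {
		return fmt.Errorf("delete disk: %w", err)
	}
	return deletePVObject(vol.name)
}

func main() {
	vol := pv{name: "pvc-69a06362-035e-4dd8-a957-f505ed687eac", reclaimPolicy: "Delete"}
	err := reclaim(vol,
		func(n string) error { fmt.Println("deleted managed disk backing", n); return nil },
		func(n string) error { fmt.Println("deleted PV object", n); return nil },
	)
	fmt.Println("reclaim finished, err =", err)
}
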
... skipping 385 lines ...
I0905 20:48:27.881096       1 stateful_set_control.go:376] StatefulSet azuredisk-7886/azuredisk-volume-tester-j4nwf has 1 unhealthy Pods starting with azuredisk-volume-tester-j4nwf-0
I0905 20:48:27.881125       1 stateful_set_control.go:451] StatefulSet azuredisk-7886/azuredisk-volume-tester-j4nwf is waiting for Pod azuredisk-volume-tester-j4nwf-0 to be Running and Ready
I0905 20:48:27.881134       1 stateful_set_control.go:112] StatefulSet azuredisk-7886/azuredisk-volume-tester-j4nwf pod status replicas=1 ready=0 current=1 updated=1
I0905 20:48:27.881142       1 stateful_set_control.go:120] StatefulSet azuredisk-7886/azuredisk-volume-tester-j4nwf revisions current=azuredisk-volume-tester-j4nwf-5ccc4f5bfd update=azuredisk-volume-tester-j4nwf-5ccc4f5bfd
I0905 20:48:27.881166       1 stateful_set.go:477] Successfully synced StatefulSet azuredisk-7886/azuredisk-volume-tester-j4nwf successful
I0905 20:48:27.881181       1 stateful_set.go:431] Finished syncing statefulset "azuredisk-7886/azuredisk-volume-tester-j4nwf" (1.534387ms)
W0905 20:48:27.933872       1 reconciler.go:344] Multi-Attach error for volume "pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-06vmzc/providers/Microsoft.Compute/disks/capz-06vmzc-dynamic-pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640") from node "capz-06vmzc-md-0-bv5pc" Volume is already exclusively attached to node capz-06vmzc-md-0-dbrp2 and can't be attached to another
I0905 20:48:27.934172       1 event.go:291] "Event occurred" object="azuredisk-7886/azuredisk-volume-tester-j4nwf-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640\" Volume is already exclusively attached to one node and can't be attached to another"
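
The Multi-Attach warning above is expected while the StatefulSet pod is rescheduled: azure-disk volumes are ReadWriteOnce, so the attach/detach reconciler refuses to attach pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640 to capz-06vmzc-md-0-bv5pc until it has been detached from capz-06vmzc-md-0-dbrp2. A minimal sketch of that exclusivity check, with hypothetical types and names (attachment, canAttach) that only illustrate the rule:

package main

import "fmt"

type attachment struct {
	volume string
	node   string
}

// canAttach reports whether a ReadWriteOnce volume may be attached to node,
// given the attachments that currently exist in the cluster.
func canAttach(volume, node string, existing []attachment) bool {
	for _, a := range existing {
		if a.volume == volume && a.node != node {
			return false // already exclusively attached to another node
		}
	}
	return true
}

func main() {
	existing := []attachment{{volume: "pvc-7177a647", node: "capz-06vmzc-md-0-dbrp2"}}
	// false until the old attachment is detached; true afterwards.
	fmt.Println(canAttach("pvc-7177a647", "capz-06vmzc-md-0-bv5pc", existing))
	fmt.Println(canAttach("pvc-7177a647", "capz-06vmzc-md-0-bv5pc", nil))
}
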
I0905 20:48:28.166178       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:48:28.192429       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 20:48:28.282775       1 pv_controller_base.go:528] resyncing PV controller
I0905 20:48:28.282924       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7886/pvc-azuredisk-volume-tester-j4nwf-0" with version 3941
I0905 20:48:28.282994       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7886/pvc-azuredisk-volume-tester-j4nwf-0]: phase: Bound, bound to: "pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640", bindCompleted: true, boundByController: true
I0905 20:48:28.283098       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7886/pvc-azuredisk-volume-tester-j4nwf-0]: volume "pvc-7177a647-e4da-4ffb-bfce-3e867e3c8640" found: phase: Bound, bound to: "azuredisk-7886/pvc-azuredisk-volume-tester-j4nwf-0 (uid: 7177a647-e4da-4ffb-bfce-3e867e3c8640)", boundByController: true
... skipping 216 lines ...
I0905 20:49:10.606670       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7886" (259.149398ms)
I0905 20:49:10.606679       1 namespace_controller.go:157] Content remaining in namespace azuredisk-7886, waiting 16 seconds
I0905 20:49:11.862941       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9103
I0905 20:49:11.928149       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9103, name kube-root-ca.crt, uid 3c0f4eda-882e-463b-9e4c-41608c473157, event type delete
I0905 20:49:11.931798       1 publisher.go:186] Finished syncing namespace "azuredisk-9103" (3.591604ms)
I0905 20:49:11.986661       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9103, name default-token-cd7lt, uid 41544b73-8cf9-443e-8c11-5ec2c7d3aa5e, event type delete
E0905 20:49:12.000892       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9103/default: secrets "default-token-xj8dr" is forbidden: unable to create new content in namespace azuredisk-9103 because it is being terminated
I0905 20:49:12.012667       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9103/default), service account deleted, removing tokens
I0905 20:49:12.012934       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9103, name default, uid fdd4b43e-6058-4eab-a17c-24f56ddbf4e4, event type delete
I0905 20:49:12.012970       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (2.901µs)
I0905 20:49:12.025659       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9103" (2µs)
I0905 20:49:12.027723       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9103, estimate: 0, errors: <nil>
I0905 20:49:12.035535       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-9103" (182.154532ms)
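
The namespace teardown above follows the usual pattern: deleteAllContent walks each remaining resource type in azuredisk-9103, deletes what it finds, and requeues until the remaining-object estimate reaches zero; transient errors such as the tokens_controller message a few lines up are benign races that resolve on the next pass. A rough sketch of that loop, with an illustrative deleteCollection stand-in:

package main

import "fmt"

func main() {
	namespace := "azuredisk-9103"
	resources := []string{"configmaps", "secrets", "serviceaccounts"}

	// deleteCollection pretends to remove every object of one resource type and
	// reports how many are left; the real controller requeues until this estimate
	// reaches zero, shrugging off transient conflicts along the way.
	deleteCollection := func(resource string) (remaining int) {
		fmt.Printf("deleteAllContent: %s in %s\n", resource, namespace)
		return 0
	}

	estimate := 0
	for _, r := range resources {
		estimate += deleteCollection(r)
	}
	fmt.Printf("namespace: %s, estimate: %d\n", namespace, estimate)
}
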
I0905 20:49:13.081231       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.305µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41294" resp=200
2022/09/05 20:49:13 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1253.593 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped
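
The 47 skipped specs in the summary above are tests that call Skip at runtime because a precondition of this job's configuration is not met. A generic Ginkgo v1 sketch of that pattern (the suite name, the precondition flag, and the spec text here are hypothetical, chosen only to show the shape):

package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestExampleSuite(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "example suite")
}

var _ = Describe("a precondition-gated group", func() {
	preconditionMet := false // stand-in for the suite's real capability check

	It("runs only when the precondition holds", func() {
		if !preconditionMet {
			// Counted as Skipped in the run summary, not as Passed or Failed.
			Skip("precondition not met for this configuration")
		}
		Expect(true).To(BeTrue())
	})
})
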

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-nwqc7, container manager
STEP: Dumping workload cluster default/capz-06vmzc logs
Sep  5 20:50:42.599: INFO: Collecting logs for Linux node capz-06vmzc-control-plane-tgzc4 in cluster capz-06vmzc in namespace default

Sep  5 20:51:42.600: INFO: Collecting boot logs for AzureMachine capz-06vmzc-control-plane-tgzc4

Failed to get logs for machine capz-06vmzc-control-plane-qj4k8, cluster default/capz-06vmzc: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  5 20:51:43.984: INFO: Collecting logs for Linux node capz-06vmzc-md-0-bv5pc in cluster capz-06vmzc in namespace default

Sep  5 20:52:43.986: INFO: Collecting boot logs for AzureMachine capz-06vmzc-md-0-bv5pc

Failed to get logs for machine capz-06vmzc-md-0-54d49cf976-r2zhn, cluster default/capz-06vmzc: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  5 20:52:44.495: INFO: Collecting logs for Linux node capz-06vmzc-md-0-dbrp2 in cluster capz-06vmzc in namespace default

Sep  5 20:53:44.497: INFO: Collecting boot logs for AzureMachine capz-06vmzc-md-0-dbrp2

Failed to get logs for machine capz-06vmzc-md-0-54d49cf976-xc8c7, cluster default/capz-06vmzc: open /etc/azure-ssh/azure-ssh: no such file or directory
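
The three "Failed to get logs ..." lines above are non-fatal: the log collector expects an SSH private key at /etc/azure-ssh/azure-ssh, that file is not present in this job, so per-machine boot-log collection is skipped and the dump continues. A minimal sketch of that guard, where collectBootLog is a hypothetical stand-in for the real collection step:

package main

import (
	"fmt"
	"os"
)

func main() {
	const keyPath = "/etc/azure-ssh/azure-ssh"

	collectBootLog := func(machine string) error {
		fmt.Println("collecting boot log for", machine)
		return nil
	}

	machines := []string{
		"capz-06vmzc-control-plane-qj4k8",
		"capz-06vmzc-md-0-54d49cf976-r2zhn",
		"capz-06vmzc-md-0-54d49cf976-xc8c7",
	}
	for _, m := range machines {
		// Without the SSH key the collector cannot reach the node; report and move on.
		if _, err := os.Stat(keyPath); err != nil {
			fmt.Printf("Failed to get logs for machine %s: %v (continuing)\n", m, err)
			continue
		}
		if err := collectBootLog(m); err != nil {
			fmt.Printf("boot log collection failed for %s: %v\n", m, err)
		}
	}
}
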
STEP: Dumping workload cluster default/capz-06vmzc kube-system pod logs
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-fwvmw, container calico-kube-controllers
STEP: Fetching kube-system pod logs took 1.189647866s
STEP: Dumping workload cluster default/capz-06vmzc Azure activity log
STEP: Collecting events for Pod kube-system/etcd-capz-06vmzc-control-plane-tgzc4
STEP: Creating log watcher for controller kube-system/etcd-capz-06vmzc-control-plane-tgzc4, container etcd
STEP: Creating log watcher for controller kube-system/kube-proxy-9nrjg, container kube-proxy
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-06vmzc-control-plane-tgzc4, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-06vmzc-control-plane-tgzc4
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-06vmzc-control-plane-tgzc4, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-06vmzc-control-plane-tgzc4
STEP: Creating log watcher for controller kube-system/kube-proxy-75hqg, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-capz-06vmzc-control-plane-tgzc4"
STEP: Collecting events for Pod kube-system/kube-proxy-75hqg
STEP: Collecting events for Pod kube-system/kube-proxy-mcstb
STEP: Collecting events for Pod kube-system/kube-proxy-9nrjg
STEP: Creating log watcher for controller kube-system/kube-proxy-mcstb, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-06vmzc-control-plane-tgzc4
STEP: failed to find events of Pod "kube-apiserver-capz-06vmzc-control-plane-tgzc4"
STEP: failed to find events of Pod "kube-scheduler-capz-06vmzc-control-plane-tgzc4"
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-06vmzc-control-plane-tgzc4, container kube-scheduler
STEP: Creating log watcher for controller kube-system/metrics-server-8c95fb79b-ffn28, container metrics-server
STEP: failed to find events of Pod "etcd-capz-06vmzc-control-plane-tgzc4"
STEP: Collecting events for Pod kube-system/calico-node-dqjfg
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-fwvmw
STEP: Creating log watcher for controller kube-system/calico-node-wmn25, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-w8wvr, container calico-node
STEP: Creating log watcher for controller kube-system/calico-node-dqjfg, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-wmn25
... skipping 24 lines ...