Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-07 20:12
Elapsed: 46m5s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 630 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
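
The two operations reported above ("created" and "labeled") correspond to an ordinary create-then-label pair of kubectl calls. The sketch below is only illustrative: the real logic lives in ./hack/create-identity-secret.sh, and the secret key and label names shown here are assumptions, not taken from this log.

# Hypothetical equivalent of the create/label steps above; the key name and
# label are assumed, not confirmed by this log.
kubectl create secret generic cluster-identity-secret \
  --from-literal=clientSecret="${AZURE_CLIENT_SECRET}"
kubectl label secret cluster-identity-secret \
  clusterctl.cluster.x-k8s.io/move=""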
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 137 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-yx2tsa-kubeconfig; do sleep 1; done"
capz-yx2tsa-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-yx2tsa-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-yx2tsa-control-plane-jxjcc   NotReady   control-plane,master   1s    v1.22.14-rc.0.5+710e88673218ed
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
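
The block above shows how the workload cluster's kubeconfig is obtained: the script polls until the capz-yx2tsa-kubeconfig secret exists, decodes its data.value field into ./kubeconfig, and then polls that cluster until a control-plane node registers (the transient `error: the server doesn't have a resource type "nodes"` simply means the workload API server was not ready on the first iteration). A minimal sketch of the same pattern, assuming kubectl is on PATH and the Cluster API secret layout shown above:

# Sketch only: extract a workload-cluster kubeconfig from its Cluster API secret
# and wait for the first control-plane node to appear.
kubectl get secret capz-yx2tsa-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c \
  'until kubectl --kubeconfig=./kubeconfig get nodes | grep -q control-plane; do sleep 1; done'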
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-yx2tsa-control-plane-jxjcc condition met
node/capz-yx2tsa-md-0-dtt5p condition met
... skipping 100 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164
STEP: Creating a kubernetes client
Sep  7 20:27:05.607: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.332 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:27:07.808: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-fq7pp" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  7 20:27:07.843: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 34.782596ms
Sep  7 20:27:09.881: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072816917s
Sep  7 20:27:11.919: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11066535s
Sep  7 20:27:13.956: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147602367s
Sep  7 20:27:15.993: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184454933s
Sep  7 20:27:18.029: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.221201584s
... skipping 4 lines ...
Sep  7 20:27:28.214: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 20.405481448s
Sep  7 20:27:30.252: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Pending", Reason="", readiness=false. Elapsed: 22.443950285s
Sep  7 20:27:32.290: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Running", Reason="", readiness=true. Elapsed: 24.481601393s
Sep  7 20:27:34.327: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Running", Reason="", readiness=false. Elapsed: 26.519098963s
Sep  7 20:27:36.367: INFO: Pod "azuredisk-volume-tester-fq7pp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.559167629s
STEP: Saw pod success
Sep  7 20:27:36.367: INFO: Pod "azuredisk-volume-tester-fq7pp" satisfied condition "Succeeded or Failed"
Sep  7 20:27:36.367: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-fq7pp"
Sep  7 20:27:36.419: INFO: Pod azuredisk-volume-tester-fq7pp has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-fq7pp in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:27:36.535: INFO: deleting PVC "azuredisk-1353"/"pvc-8rm8b"
Sep  7 20:27:36.535: INFO: Deleting PersistentVolumeClaim "pvc-8rm8b"
STEP: waiting for claim's PV "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" to be deleted
Sep  7 20:27:36.573: INFO: Waiting up to 10m0s for PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a to get deleted
Sep  7 20:27:36.609: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Bound (35.858867ms)
Sep  7 20:27:41.650: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Failed (5.076687335s)
Sep  7 20:27:46.688: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Failed (10.115145905s)
Sep  7 20:27:51.724: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Failed (15.150740915s)
Sep  7 20:27:56.763: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Failed (20.189858645s)
Sep  7 20:28:01.801: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a found and phase=Failed (25.227723435s)
Sep  7 20:28:06.840: INFO: PersistentVolume pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a was removed
Sep  7 20:28:06.841: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  7 20:28:06.876: INFO: Claim "azuredisk-1353" in namespace "pvc-8rm8b" doesn't exist in the system
Sep  7 20:28:06.876: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-qvh5r
Sep  7 20:28:06.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
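
Each dynamic-provisioning case in this suite follows the shape visible above: create a StorageClass and PVC, deploy a tester pod, wait for it to reach Succeeded, then delete the pod, PVC, and StorageClass and poll until the bound PV is gone (the PV typically passes through Released or Failed before the reclaim completes). A rough shell equivalent of that final "waiting for claim's PV ... to be deleted" loop, assuming the PV name is known, would be:

# Hypothetical sketch of the PV-deletion wait the test performs (up to 10m);
# PV_NAME is whatever volume the claim was bound to.
PV_NAME=pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a
timeout 600 bash -c \
  "while kubectl get pv $PV_NAME >/dev/null 2>&1; do sleep 5; done"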
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  7 20:28:22.952: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-k5zwg"
Sep  7 20:28:22.999: INFO: Error getting logs for pod azuredisk-volume-tester-k5zwg: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-k5zwg)
STEP: Deleting pod azuredisk-volume-tester-k5zwg in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:28:23.112: INFO: deleting PVC "azuredisk-1563"/"pvc-zl6mb"
Sep  7 20:28:23.112: INFO: Deleting PersistentVolumeClaim "pvc-zl6mb"
STEP: waiting for claim's PV "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" to be deleted
... skipping 18 lines ...
Sep  7 20:29:48.843: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m25.690557984s)
Sep  7 20:29:53.884: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m30.730688595s)
Sep  7 20:29:58.919: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m35.766435561s)
Sep  7 20:30:03.959: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m40.806153306s)
Sep  7 20:30:08.997: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m45.844578794s)
Sep  7 20:30:14.036: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Bound (1m50.883155838s)
Sep  7 20:30:19.075: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (1m55.92224565s)
Sep  7 20:30:24.112: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (2m0.959322479s)
Sep  7 20:30:29.148: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (2m5.995268881s)
Sep  7 20:30:34.188: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (2m11.035158916s)
Sep  7 20:30:39.228: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (2m16.074867564s)
Sep  7 20:30:44.265: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e found and phase=Failed (2m21.111717394s)
Sep  7 20:30:49.300: INFO: PersistentVolume pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e was removed
Sep  7 20:30:49.300: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Sep  7 20:30:49.336: INFO: Claim "azuredisk-1563" in namespace "pvc-zl6mb" doesn't exist in the system
Sep  7 20:30:49.336: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-mcfzw
Sep  7 20:30:49.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:30:50.091: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4pq2l" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  7 20:30:50.126: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 34.904366ms
Sep  7 20:30:52.163: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071997278s
Sep  7 20:30:54.200: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 4.1088506s
Sep  7 20:30:56.238: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146877696s
Sep  7 20:30:58.276: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 8.184135353s
Sep  7 20:31:00.311: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219951862s
Sep  7 20:31:02.348: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 12.256351681s
Sep  7 20:31:04.385: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 14.293363679s
Sep  7 20:31:06.424: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 16.332143208s
Sep  7 20:31:08.462: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Pending", Reason="", readiness=false. Elapsed: 18.37030515s
Sep  7 20:31:10.502: INFO: Pod "azuredisk-volume-tester-4pq2l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.410340524s
STEP: Saw pod success
Sep  7 20:31:10.502: INFO: Pod "azuredisk-volume-tester-4pq2l" satisfied condition "Succeeded or Failed"
Sep  7 20:31:10.502: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-4pq2l"
Sep  7 20:31:10.552: INFO: Pod azuredisk-volume-tester-4pq2l has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-4pq2l in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:31:10.678: INFO: deleting PVC "azuredisk-7463"/"pvc-48xng"
Sep  7 20:31:10.678: INFO: Deleting PersistentVolumeClaim "pvc-48xng"
STEP: waiting for claim's PV "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" to be deleted
Sep  7 20:31:10.715: INFO: Waiting up to 10m0s for PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b to get deleted
Sep  7 20:31:10.752: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b found and phase=Released (37.008542ms)
Sep  7 20:31:15.792: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b found and phase=Failed (5.076850118s)
Sep  7 20:31:20.830: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b found and phase=Failed (10.114897364s)
Sep  7 20:31:25.870: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b found and phase=Failed (15.155541642s)
Sep  7 20:31:30.910: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b found and phase=Failed (20.195263225s)
Sep  7 20:31:35.950: INFO: PersistentVolume pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b was removed
Sep  7 20:31:35.950: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7463 to be removed
Sep  7 20:31:35.985: INFO: Claim "azuredisk-7463" in namespace "pvc-48xng" doesn't exist in the system
Sep  7 20:31:35.985: INFO: deleting StorageClass azuredisk-7463-kubernetes.io-azure-disk-dynamic-sc-fhp79
Sep  7 20:31:36.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7463" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  7 20:31:36.727: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t242k" in namespace "azuredisk-9241" to be "Error status code"
Sep  7 20:31:36.761: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.892387ms
Sep  7 20:31:38.799: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072561033s
Sep  7 20:31:40.835: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108568834s
Sep  7 20:31:42.871: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144775469s
Sep  7 20:31:44.908: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.181315867s
Sep  7 20:31:46.945: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.218135958s
Sep  7 20:31:48.981: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.253943777s
Sep  7 20:31:51.018: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.291857217s
Sep  7 20:31:53.058: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Pending", Reason="", readiness=false. Elapsed: 16.33178471s
Sep  7 20:31:55.096: INFO: Pod "azuredisk-volume-tester-t242k": Phase="Failed", Reason="", readiness=false. Elapsed: 18.369395437s
STEP: Saw pod failure
Sep  7 20:31:55.096: INFO: Pod "azuredisk-volume-tester-t242k" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 20:31:55.143: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-t242k"
Sep  7 20:31:55.182: INFO: Pod azuredisk-volume-tester-t242k has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-t242k in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:31:55.300: INFO: deleting PVC "azuredisk-9241"/"pvc-52grh"
Sep  7 20:31:55.300: INFO: Deleting PersistentVolumeClaim "pvc-52grh"
STEP: waiting for claim's PV "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" to be deleted
Sep  7 20:31:55.337: INFO: Waiting up to 10m0s for PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 to get deleted
Sep  7 20:31:55.374: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Released (36.488423ms)
Sep  7 20:32:00.414: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (5.076832533s)
Sep  7 20:32:05.451: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (10.113284809s)
Sep  7 20:32:10.493: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (15.155715755s)
Sep  7 20:32:15.532: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (20.194951738s)
Sep  7 20:32:20.569: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (25.231384735s)
Sep  7 20:32:25.608: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (30.27066571s)
Sep  7 20:32:30.644: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 found and phase=Failed (35.306049988s)
Sep  7 20:32:35.684: INFO: PersistentVolume pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 was removed
Sep  7 20:32:35.684: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  7 20:32:35.719: INFO: Claim "azuredisk-9241" in namespace "pvc-52grh" doesn't exist in the system
Sep  7 20:32:35.719: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-qpcqf
Sep  7 20:32:35.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9241" for this suite.
... skipping 53 lines ...
Sep  7 20:33:30.193: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (5.070630147s)
Sep  7 20:33:35.233: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (10.110740068s)
Sep  7 20:33:40.270: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (15.147054965s)
Sep  7 20:33:45.307: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (20.184693431s)
Sep  7 20:33:50.344: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (25.221007215s)
Sep  7 20:33:55.383: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Bound (30.260812346s)
Sep  7 20:34:00.423: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (35.300321256s)
Sep  7 20:34:05.463: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (40.340141451s)
Sep  7 20:34:10.500: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (45.377063514s)
Sep  7 20:34:15.537: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (50.414220844s)
Sep  7 20:34:20.574: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (55.451052978s)
Sep  7 20:34:25.613: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (1m0.490754998s)
Sep  7 20:34:30.653: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 found and phase=Failed (1m5.530325987s)
Sep  7 20:34:35.689: INFO: PersistentVolume pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 was removed
Sep  7 20:34:35.689: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  7 20:34:35.724: INFO: Claim "azuredisk-9336" in namespace "pvc-9wjbk" doesn't exist in the system
Sep  7 20:34:35.724: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-rvlzx
Sep  7 20:34:35.761: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-gzqcc"
Sep  7 20:34:35.812: INFO: Pod azuredisk-volume-tester-gzqcc has the following logs: 
... skipping 8 lines ...
Sep  7 20:34:41.029: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (5.071051259s)
Sep  7 20:34:46.065: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (10.107509474s)
Sep  7 20:34:51.105: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (15.146875318s)
Sep  7 20:34:56.141: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (20.182905721s)
Sep  7 20:35:01.176: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (25.218556847s)
Sep  7 20:35:06.212: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Bound (30.254454168s)
Sep  7 20:35:11.252: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Failed (35.294470447s)
Sep  7 20:35:16.292: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Failed (40.334252822s)
Sep  7 20:35:21.328: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Failed (45.370635184s)
Sep  7 20:35:26.368: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Failed (50.410490058s)
Sep  7 20:35:31.408: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 found and phase=Failed (55.45030819s)
Sep  7 20:35:36.447: INFO: PersistentVolume pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 was removed
Sep  7 20:35:36.447: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  7 20:35:36.482: INFO: Claim "azuredisk-9336" in namespace "pvc-dqxfl" doesn't exist in the system
Sep  7 20:35:36.482: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-6pxmp
Sep  7 20:35:36.519: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-pqt9h"
Sep  7 20:35:36.566: INFO: Pod azuredisk-volume-tester-pqt9h has the following logs: 
... skipping 8 lines ...
Sep  7 20:35:41.784: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (5.07191173s)
Sep  7 20:35:46.819: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (10.107757567s)
Sep  7 20:35:51.860: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (15.147974145s)
Sep  7 20:35:56.899: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (20.187409238s)
Sep  7 20:36:01.936: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (25.22436891s)
Sep  7 20:36:06.973: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Bound (30.261014999s)
Sep  7 20:36:12.012: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Failed (35.300309009s)
Sep  7 20:36:17.053: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Failed (40.341380679s)
Sep  7 20:36:22.093: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Failed (45.381443115s)
Sep  7 20:36:27.133: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Failed (50.421434173s)
Sep  7 20:36:32.173: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 found and phase=Failed (55.461191159s)
Sep  7 20:36:37.213: INFO: PersistentVolume pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 was removed
Sep  7 20:36:37.213: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  7 20:36:37.248: INFO: Claim "azuredisk-9336" in namespace "pvc-k94wm" doesn't exist in the system
Sep  7 20:36:37.248: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-md4pf
Sep  7 20:36:37.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9336" for this suite.
... skipping 59 lines ...
Sep  7 20:38:04.973: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (5.074597646s)
Sep  7 20:38:10.012: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (10.114010171s)
Sep  7 20:38:15.055: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (15.156344549s)
Sep  7 20:38:20.092: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (20.194033995s)
Sep  7 20:38:25.129: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (25.230947846s)
Sep  7 20:38:30.170: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Bound (30.271404022s)
Sep  7 20:38:35.206: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (35.307845716s)
Sep  7 20:38:40.245: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (40.346644486s)
Sep  7 20:38:45.283: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (45.385166144s)
Sep  7 20:38:50.323: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (50.424548757s)
Sep  7 20:38:55.363: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (55.464570708s)
Sep  7 20:39:00.400: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df found and phase=Failed (1m0.5022803s)
Sep  7 20:39:05.439: INFO: PersistentVolume pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df was removed
Sep  7 20:39:05.439: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2205 to be removed
Sep  7 20:39:05.474: INFO: Claim "azuredisk-2205" in namespace "pvc-cfclr" doesn't exist in the system
Sep  7 20:39:05.474: INFO: deleting StorageClass azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-nqwbh
Sep  7 20:39:05.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2205" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:39:23.068: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4tw48" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Sep  7 20:39:23.108: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 40.23152ms
Sep  7 20:39:25.142: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073703395s
Sep  7 20:39:27.176: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107485721s
Sep  7 20:39:29.210: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142373946s
Sep  7 20:39:31.246: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177608469s
Sep  7 20:39:33.281: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 10.213319023s
... skipping 8 lines ...
Sep  7 20:39:51.604: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 28.535610473s
Sep  7 20:39:53.639: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Pending", Reason="", readiness=false. Elapsed: 30.570743505s
Sep  7 20:39:55.673: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Running", Reason="", readiness=true. Elapsed: 32.605118233s
Sep  7 20:39:57.709: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Running", Reason="", readiness=false. Elapsed: 34.640531474s
Sep  7 20:39:59.744: INFO: Pod "azuredisk-volume-tester-4tw48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.676313157s
STEP: Saw pod success
Sep  7 20:39:59.744: INFO: Pod "azuredisk-volume-tester-4tw48" satisfied condition "Succeeded or Failed"
Sep  7 20:39:59.744: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-4tw48"
Sep  7 20:39:59.795: INFO: Pod azuredisk-volume-tester-4tw48 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-4tw48 in namespace azuredisk-1387
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:39:59.906: INFO: deleting PVC "azuredisk-1387"/"pvc-ltwpj"
Sep  7 20:39:59.906: INFO: Deleting PersistentVolumeClaim "pvc-ltwpj"
STEP: waiting for claim's PV "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" to be deleted
Sep  7 20:39:59.939: INFO: Waiting up to 10m0s for PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa to get deleted
Sep  7 20:39:59.971: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Released (32.00315ms)
Sep  7 20:40:05.005: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (5.065295421s)
Sep  7 20:40:10.042: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (10.102785243s)
Sep  7 20:40:15.080: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (15.14031275s)
Sep  7 20:40:20.116: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (20.176970793s)
Sep  7 20:40:25.152: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (25.213080787s)
Sep  7 20:40:30.188: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (30.249053528s)
Sep  7 20:40:35.225: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (35.285453727s)
Sep  7 20:40:40.258: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (40.318544944s)
Sep  7 20:40:45.292: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (45.352767497s)
Sep  7 20:40:50.327: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (50.387431399s)
Sep  7 20:40:55.361: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (55.421565195s)
Sep  7 20:41:00.396: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa found and phase=Failed (1m0.456401189s)
Sep  7 20:41:05.432: INFO: PersistentVolume pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa was removed
Sep  7 20:41:05.432: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  7 20:41:05.465: INFO: Claim "azuredisk-1387" in namespace "pvc-ltwpj" doesn't exist in the system
Sep  7 20:41:05.465: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-mn5tp
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:41:26.799: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-t6fzl" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Sep  7 20:41:26.831: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 32.558933ms
Sep  7 20:41:28.865: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066575397s
Sep  7 20:41:30.901: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102514503s
Sep  7 20:41:32.936: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137067351s
Sep  7 20:41:34.968: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.169759759s
Sep  7 20:41:37.001: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202688437s
... skipping 7 lines ...
Sep  7 20:41:53.271: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 26.472104737s
Sep  7 20:41:55.305: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 28.506047644s
Sep  7 20:41:57.340: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Pending", Reason="", readiness=false. Elapsed: 30.541033469s
Sep  7 20:41:59.374: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Running", Reason="", readiness=false. Elapsed: 32.575837904s
Sep  7 20:42:01.409: INFO: Pod "azuredisk-volume-tester-t6fzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.610092379s
STEP: Saw pod success
Sep  7 20:42:01.409: INFO: Pod "azuredisk-volume-tester-t6fzl" satisfied condition "Succeeded or Failed"
Sep  7 20:42:01.409: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-t6fzl"
Sep  7 20:42:01.459: INFO: Pod azuredisk-volume-tester-t6fzl has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.058628 seconds, 1.7GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  7 20:42:01.581: INFO: deleting PVC "azuredisk-4547"/"pvc-67psk"
Sep  7 20:42:01.581: INFO: Deleting PersistentVolumeClaim "pvc-67psk"
STEP: waiting for claim's PV "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" to be deleted
Sep  7 20:42:01.615: INFO: Waiting up to 10m0s for PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b to get deleted
Sep  7 20:42:01.647: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Released (31.455628ms)
Sep  7 20:42:06.679: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (5.064274415s)
Sep  7 20:42:11.716: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (10.100370407s)
Sep  7 20:42:16.752: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (15.137210808s)
Sep  7 20:42:21.790: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (20.174672202s)
Sep  7 20:42:26.822: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (25.207069756s)
Sep  7 20:42:31.860: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b found and phase=Failed (30.244813168s)
Sep  7 20:42:36.897: INFO: PersistentVolume pvc-39ce5944-6007-4bc5-8930-aa384aefb01b was removed
Sep  7 20:42:36.897: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  7 20:42:36.930: INFO: Claim "azuredisk-4547" in namespace "pvc-67psk" doesn't exist in the system
Sep  7 20:42:36.930: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-q4dzs
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:42:49.210: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ghsgd" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Sep  7 20:42:49.242: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.251902ms
Sep  7 20:42:51.274: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064777816s
Sep  7 20:42:53.308: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098814679s
Sep  7 20:42:55.343: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133804031s
Sep  7 20:42:57.380: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170350371s
Sep  7 20:42:59.415: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205578093s
... skipping 8 lines ...
Sep  7 20:43:17.729: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.5192039s
Sep  7 20:43:19.765: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 30.55568085s
Sep  7 20:43:21.801: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.591290108s
Sep  7 20:43:23.836: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.626417569s
Sep  7 20:43:25.872: INFO: Pod "azuredisk-volume-tester-ghsgd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 36.66187356s
STEP: Saw pod success
Sep  7 20:43:25.872: INFO: Pod "azuredisk-volume-tester-ghsgd" satisfied condition "Succeeded or Failed"
Sep  7 20:43:25.872: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-ghsgd"
Sep  7 20:43:25.906: INFO: Pod azuredisk-volume-tester-ghsgd has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-ghsgd in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:43:26.012: INFO: deleting PVC "azuredisk-7578"/"pvc-4jc76"
Sep  7 20:43:26.012: INFO: Deleting PersistentVolumeClaim "pvc-4jc76"
STEP: waiting for claim's PV "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" to be deleted
Sep  7 20:43:26.046: INFO: Waiting up to 10m0s for PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e to get deleted
Sep  7 20:43:26.078: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Released (31.561077ms)
Sep  7 20:43:31.113: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (5.066890737s)
Sep  7 20:43:36.149: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (10.103086755s)
Sep  7 20:43:41.184: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (15.138542581s)
Sep  7 20:43:46.222: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (20.175793339s)
Sep  7 20:43:51.258: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (25.212433802s)
Sep  7 20:43:56.292: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (30.246296517s)
Sep  7 20:44:01.331: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (35.284610857s)
Sep  7 20:44:06.363: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (40.317225253s)
Sep  7 20:44:11.398: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (45.351686823s)
Sep  7 20:44:16.433: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (50.387229526s)
Sep  7 20:44:21.470: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (55.424029095s)
Sep  7 20:44:26.506: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (1m0.459915856s)
Sep  7 20:44:31.542: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e found and phase=Failed (1m5.495951781s)
Sep  7 20:44:36.574: INFO: PersistentVolume pvc-f6de6d37-7bc7-499a-8a04-f794058e238e was removed
Sep  7 20:44:36.574: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  7 20:44:36.606: INFO: Claim "azuredisk-7578" in namespace "pvc-4jc76" doesn't exist in the system
Sep  7 20:44:36.606: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-pcg9x
STEP: validating provisioned PV
STEP: checking the PV
... skipping 509 lines ...
I0907 20:23:02.710890       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-07 20:15:48 +0000 UTC to 2032-09-04 20:20:48 +0000 UTC (now=2022-09-07 20:23:02.708569904 +0000 UTC))"
I0907 20:23:02.711439       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662582181\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662582180\" (2022-09-07 19:22:59 +0000 UTC to 2023-09-07 19:22:59 +0000 UTC (now=2022-09-07 20:23:02.711406274 +0000 UTC))"
I0907 20:23:02.711988       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662582182\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662582182\" (2022-09-07 19:23:01 +0000 UTC to 2023-09-07 19:23:01 +0000 UTC (now=2022-09-07 20:23:02.711951364 +0000 UTC))"
I0907 20:23:02.712162       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0907 20:23:02.715602       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0907 20:23:02.715925       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0907 20:23:04.729239       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 20:23:04.729305       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0907 20:23:08.017309       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0907 20:23:08.017691       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-yx2tsa-control-plane-jxjcc_5db00c65-dfbf-494f-ae4a-9e6f0a9bf867 became leader"
W0907 20:23:08.064613       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0907 20:23:08.065423       1 azure_auth.go:232] Using AzurePublicCloud environment
I0907 20:23:08.065492       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0907 20:23:08.065563       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0907 20:23:08.068463       1 reflector.go:219] Starting reflector *v1.Node (12h18m46.021296396s) from k8s.io/client-go/informers/factory.go:134
I0907 20:23:08.068476       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0907 20:23:08.068802       1 reflector.go:219] Starting reflector *v1.ServiceAccount (12h18m46.021296396s) from k8s.io/client-go/informers/factory.go:134
I0907 20:23:08.068814       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0907 20:23:08.069101       1 reflector.go:219] Starting reflector *v1.Secret (12h18m46.021296396s) from k8s.io/client-go/informers/factory.go:134
I0907 20:23:08.069114       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0907 20:23:08.129192       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 20:23:08.130312       1 controllermanager.go:562] Starting "service"
I0907 20:23:08.154678       1 controller.go:272] Triggering nodeSync
I0907 20:23:08.154711       1 controllermanager.go:577] Started "service"
I0907 20:23:08.154724       1 controllermanager.go:562] Starting "attachdetach"
I0907 20:23:08.154776       1 controller.go:233] Starting service controller
I0907 20:23:08.154784       1 shared_informer.go:240] Waiting for caches to sync for service
... skipping 6 lines ...
I0907 20:23:08.203730       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0907 20:23:08.203756       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 20:23:08.203778       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 20:23:08.203793       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0907 20:23:08.203814       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0907 20:23:08.203831       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0907 20:23:08.203885       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:23:08.203905       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 20:23:08.216706       1 controllermanager.go:577] Started "attachdetach"
I0907 20:23:08.216735       1 controllermanager.go:562] Starting "endpointslice"
I0907 20:23:08.216900       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-control-plane-jxjcc"
W0907 20:23:08.216929       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-yx2tsa-control-plane-jxjcc" does not exist
I0907 20:23:08.216948       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 20:23:08.216957       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0907 20:23:08.254899       1 controllermanager.go:577] Started "endpointslice"
I0907 20:23:08.255134       1 controllermanager.go:562] Starting "resourcequota"
I0907 20:23:08.255402       1 endpointslice_controller.go:257] Starting endpoint slice controller
I0907 20:23:08.255657       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
... skipping 151 lines ...
I0907 20:23:10.328953       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0907 20:23:10.328969       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0907 20:23:10.328991       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0907 20:23:10.329051       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 20:23:10.329166       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0907 20:23:10.329201       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 20:23:10.329255       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:23:10.329275       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 20:23:10.329820       1 controllermanager.go:577] Started "persistentvolume-binder"
I0907 20:23:10.329846       1 controllermanager.go:562] Starting "route"
I0907 20:23:10.329856       1 core.go:241] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0907 20:23:10.329864       1 controllermanager.go:569] Skipping "route"
I0907 20:23:10.329875       1 controllermanager.go:562] Starting "pvc-protection"
... skipping 64 lines ...
I0907 20:23:12.180710       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0907 20:23:12.180726       1 graph_builder.go:273] garbage controller monitor not synced: no monitors
I0907 20:23:12.182372       1 graph_builder.go:289] GraphBuilder running
I0907 20:23:12.268885       1 request.go:597] Waited for 90.60409ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector
I0907 20:23:12.318642       1 request.go:597] Waited for 97.002028ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system
I0907 20:23:12.368717       1 request.go:597] Waited for 97.639062ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector/token
W0907 20:23:12.399656       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 20:23:12.400335       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [/v1, Resource=configmaps /v1, Resource=endpoints /v1, Resource=events /v1, Resource=limitranges /v1, Resource=namespaces /v1, Resource=nodes /v1, Resource=persistentvolumeclaims /v1, Resource=persistentvolumes /v1, Resource=pods /v1, Resource=podtemplates /v1, Resource=replicationcontrollers /v1, Resource=resourcequotas /v1, Resource=secrets /v1, Resource=serviceaccounts /v1, Resource=services admissionregistration.k8s.io/v1, Resource=mutatingwebhookconfigurations admissionregistration.k8s.io/v1, Resource=validatingwebhookconfigurations apiextensions.k8s.io/v1, Resource=customresourcedefinitions apiregistration.k8s.io/v1, Resource=apiservices apps/v1, Resource=controllerrevisions apps/v1, Resource=daemonsets apps/v1, Resource=deployments apps/v1, Resource=replicasets apps/v1, Resource=statefulsets autoscaling/v1, Resource=horizontalpodautoscalers batch/v1, Resource=cronjobs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta1, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=storageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []
I0907 20:23:12.400656       1 garbagecollector.go:219] reset restmapper
I0907 20:23:12.418299       1 request.go:597] Waited for 97.319445ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts
I0907 20:23:12.422700       1 controllermanager.go:577] Started "daemonset"
I0907 20:23:12.422729       1 controllermanager.go:562] Starting "csrcleaner"
I0907 20:23:12.422768       1 daemon_controller.go:284] Starting daemon sets controller
... skipping 369 lines ...
I0907 20:23:12.925833       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-07 20:23:12.89766963 +0000 UTC m=+13.098628280 - now: 2022-09-07 20:23:12.925824881 +0000 UTC m=+13.126783631]
I0907 20:23:12.927251       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 20:23:12.936964       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/metrics-server"
I0907 20:23:12.929040       1 request.go:597] Waited for 285.284549ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta1/prioritylevelconfigurations?limit=500&resourceVersion=0
I0907 20:23:12.929245       1 request.go:597] Waited for 288.169164ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/node-controller
I0907 20:23:12.944810       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="359.295895ms"
I0907 20:23:12.957784       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:23:12.958001       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 20:23:12.957824386 +0000 UTC m=+13.158783036"
I0907 20:23:12.958857       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-07 20:23:12 +0000 UTC - now: 2022-09-07 20:23:12.958848933 +0000 UTC m=+13.159807583]
I0907 20:23:12.962216       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamhandles
I0907 20:23:12.962779       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="384.454561ms"
I0907 20:23:12.962934       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:23:12.963080       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:23:12.963035436 +0000 UTC m=+13.163994186"
I0907 20:23:12.964339       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:23:12 +0000 UTC - now: 2022-09-07 20:23:12.964331422 +0000 UTC m=+13.165290072]
I0907 20:23:12.965253       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="380.105657ms"
I0907 20:23:12.965290       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:23:12.971371       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-07 20:23:12.971230515 +0000 UTC m=+13.172189165"
I0907 20:23:12.972462       1 deployment_util.go:808] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-07 20:23:12 +0000 UTC - now: 2022-09-07 20:23:12.972454591 +0000 UTC m=+13.173413241]
I0907 20:23:12.968681       1 request.go:597] Waited for 310.494376ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpointslice-controller
I0907 20:23:12.980007       1 request.go:597] Waited for 335.640695ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/policy/v1beta1/podsecuritypolicies?limit=500&resourceVersion=0
I0907 20:23:12.984664       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
I0907 20:23:12.997129       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
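Note: the repeated "Operation cannot be fulfilled on deployments.apps ... the object has been modified" errors above are ordinary optimistic-concurrency conflicts: two writers raced to update the same Deployment, the write carrying the stale resourceVersion was rejected, and the deployment controller simply requeued and synced again, as the following "Started/Finished syncing deployment" lines show. A diagnostic sketch (not part of the job) to confirm the rollouts converged despite the conflicts, reusing the ./kubeconfig created earlier in this log:
kubectl --kubeconfig=./kubeconfig -n kube-system rollout status deployment/coredns
kubectl --kubeconfig=./kubeconfig -n kube-system rollout status deployment/calico-kube-controllers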
... skipping 605 lines ...
I0907 20:23:44.281938       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-vmsfz"
I0907 20:23:44.281992       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-vmsfz, PodDisruptionBudget controller will avoid syncing.
I0907 20:23:44.282001       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-vmsfz"
I0907 20:23:44.282125       1 replica_set.go:443] Pod coredns-78fcd69978-vmsfz updated, objectMeta {Name:coredns-78fcd69978-vmsfz GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:6cdd9522-e0b0-44cb-8d55-5ba73d84af14 ResourceVersion:651 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:3318d544-60d8-4fe0-ae36-9c67bcf6f935 Controller:0xc000bcba87 BlockOwnerDeletion:0xc000bcba88}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3318d544-60d8-4fe0-ae36-9c67bcf6f935\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-78fcd69978-vmsfz GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:6cdd9522-e0b0-44cb-8d55-5ba73d84af14 ResourceVersion:658 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:3318d544-60d8-4fe0-ae36-9c67bcf6f935 Controller:0xc0019ce79f BlockOwnerDeletion:0xc0019ce7c0}] Finalizers:[] ClusterName: 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3318d544-60d8-4fe0-ae36-9c67bcf6f935\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:23:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0907 20:23:44.282631       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0be5d4c35cb7709, ext:13103485479, loc:(*time.Location)(0x751a1a0)}}
I0907 20:23:44.282974       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (346.028µs)
W0907 20:23:44.295644       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 20:23:44.718703       1 replica_set.go:443] Pod calico-kube-controllers-969cf87c4-zdc5x updated, objectMeta {Name:calico-kube-controllers-969cf87c4-zdc5x GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:ce016c14-a07d-421c-9695-8b83479ef531 ResourceVersion:652 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:2564c067-c2ff-45f8-b326-c9543894815e Controller:0xc000bcbf70 BlockOwnerDeletion:0xc000bcbf71}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2564c067-c2ff-45f8-b326-c9543894815e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:23:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:calico-kube-controllers-969cf87c4-zdc5x GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:ce016c14-a07d-421c-9695-8b83479ef531 ResourceVersion:662 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[cni.projectcalico.org/containerID:2de54cd3fcf0ebafc5bf4c85ea8bdf006be5823d79700b9f1fc43d725efc1f5f cni.projectcalico.org/podIP:192.168.36.1/32 cni.projectcalico.org/podIPs:192.168.36.1/32] 
OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:2564c067-c2ff-45f8-b326-c9543894815e Controller:0xc00120d850 BlockOwnerDeletion:0xc00120d851}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2564c067-c2ff-45f8-b326-c9543894815e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 20:23:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:23:44 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0907 20:23:44.718953       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0be5d4c3511ee5b, ext:13091326229, loc:(*time.Location)(0x751a1a0)}}
I0907 20:23:44.719068       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (121.51µs)
I0907 20:23:44.719139       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-zdc5x"
I0907 20:23:44.719177       1 disruption.go:433] updatePod "calico-kube-controllers-969cf87c4-zdc5x" -> PDB "calico-kube-controllers"
I0907 20:23:44.719228       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (30.503µs)
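Note: the garbagecollector warning "failed to discover some groups: metrics.k8s.io/v1beta1: the server is currently unable to handle the request" (echoed later by the resource-quota controller) only means the aggregated metrics API is not serving yet while metrics-server comes up; the garbage collector keeps working with the groups it can discover. A hedged way to watch the aggregated API become available, assuming the standard metrics-server APIService name:
kubectl --kubeconfig=./kubeconfig get apiservice v1beta1.metrics.k8s.io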
... skipping 80 lines ...
I0907 20:23:47.416321       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/coredns-78fcd69978"
I0907 20:23:47.416355       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:23:47.416334595 +0000 UTC m=+47.617293245"
I0907 20:23:47.431086       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="14.729891ms"
I0907 20:23:47.431292       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 20:23:47.431328       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:23:47.431306901 +0000 UTC m=+47.632265551"
I0907 20:23:47.432126       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="801.148µs"
I0907 20:23:47.645808       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-yx2tsa-control-plane-jxjcc transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:23:24 +0000 UTC,LastTransitionTime:2022-09-07 20:22:50 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:23:44 +0000 UTC,LastTransitionTime:2022-09-07 20:23:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:23:47.645978       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-control-plane-jxjcc ReadyCondition updated. Updating timestamp.
I0907 20:23:47.646048       1 node_lifecycle_controller.go:893] Node capz-yx2tsa-control-plane-jxjcc is healthy again, removing all taints
I0907 20:23:47.646088       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0907 20:23:48.227089       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (382.323µs)
I0907 20:23:48.768725       1 disruption.go:427] updatePod called on pod "calico-node-dzpmk"
I0907 20:23:48.768785       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-dzpmk, PodDisruptionBudget controller will avoid syncing.
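Note: the ReadyCondition transition above is the expected sequence on a fresh node: the kubelet reports NotReady with "cni plugin not initialized" until Calico writes its CNI config, after which node_lifecycle_controller flips Ready to True, removes the not-ready taints, and exits master disruption mode. One way to watch the same condition directly (diagnostic only, reusing this job's ./kubeconfig):
kubectl --kubeconfig=./kubeconfig get node capz-yx2tsa-control-plane-jxjcc -o jsonpath='{.status.conditions[?(@.type=="Ready")].reason}{"\n"}'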
... skipping 68 lines ...
I0907 20:24:12.625703       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:24:12.655384       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:24:12.844028       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.605µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49606" resp=200
E0907 20:24:12.874792       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0907 20:24:12.874971       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:24:14.210994       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-control-plane-jxjcc"
W0907 20:24:14.357418       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0907 20:24:17.652678       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-control-plane-jxjcc ReadyCondition updated. Updating timestamp.
I0907 20:24:22.804824       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-t736f1" (17.303µs)
I0907 20:24:22.843130       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="212.232µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55086" resp=200
I0907 20:24:23.207794       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-tud862" (22.104µs)
I0907 20:24:24.915121       1 replica_set.go:443] Pod metrics-server-8c95fb79b-slm5t updated, objectMeta {Name:metrics-server-8c95fb79b-slm5t GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:43cdc13b-05d8-4679-a03d-26864ce50757 ResourceVersion:724 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:79f52c833834f8328319ff28720b1b2bbe25e8ca5aefd4a1fe4dc521c5ac0e23 cni.projectcalico.org/podIP:192.168.36.2/32 cni.projectcalico.org/podIPs:192.168.36.2/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:f637920d-5248-4dff-b948-51b41ce37619 Controller:0xc002090c07 BlockOwnerDeletion:0xc002090c08}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f637920d-5248-4dff-b948-51b41ce37619\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 20:23:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:23:56 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.36.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:metrics-server-8c95fb79b-slm5t GenerateName:metrics-server-8c95fb79b- Namespace:kube-system SelfLink: UID:43cdc13b-05d8-4679-a03d-26864ce50757 ResourceVersion:765 Generation:0 CreationTimestamp:2022-09-07 20:23:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:metrics-server pod-template-hash:8c95fb79b] Annotations:map[cni.projectcalico.org/containerID:79f52c833834f8328319ff28720b1b2bbe25e8ca5aefd4a1fe4dc521c5ac0e23 cni.projectcalico.org/podIP:192.168.36.2/32 cni.projectcalico.org/podIPs:192.168.36.2/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:metrics-server-8c95fb79b UID:f637920d-5248-4dff-b948-51b41ce37619 Controller:0xc0025be720 BlockOwnerDeletion:0xc0025be721}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:23:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f637920d-5248-4dff-b948-51b41ce37619\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"metrics-server\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":4443,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:readOnlyRootFilesystem":{},"f:runAsNonRoot":{},"f:runAsUser":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/tmp\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"tmp-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:23:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 20:23:45 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:24:24 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.36.2\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0907 20:24:24.915315       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-8c95fb79b", timestamp:time.Time{wall:0xc0be5d4c356cf93a, ext:13097292888, loc:(*time.Location)(0x751a1a0)}}
... skipping 93 lines ...
I0907 20:24:48.136867       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 20:24:48.136876       1 controller.go:804] Finished updateLoadBalancerHosts
I0907 20:24:48.136882       1 controller.go:731] It took 1.7201e-05 seconds to finish nodeSyncInternal
I0907 20:24:48.137257       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-yx2tsa-md-0-dtt5p}
I0907 20:24:48.138002       1 taint_manager.go:440] "Updating known taints on node" node="capz-yx2tsa-md-0-dtt5p" taints=[]
I0907 20:24:48.137939       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
W0907 20:24:48.138482       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-yx2tsa-md-0-dtt5p" does not exist
I0907 20:24:48.139055       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d4e1967f6ce, ext:20627202540, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:48.139327       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d64084b3bc3, ext:108340106977, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:48.139367       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-yx2tsa-md-0-dtt5p], creating 1
I0907 20:24:48.141203       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d552f1e6d51, ext:48991481867, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:48.143643       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d64086c17dc, ext:108342260474, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:48.143696       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-yx2tsa-md-0-dtt5p], creating 1
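Note: this block is the DaemonSet controller reacting to the new worker capz-yx2tsa-md-0-dtt5p: it records an expectation of one add per DaemonSet and creates one kube-proxy and one calico-node pod for the node; the preceding "Failed to update statusUpdateNeeded ... does not exist" warning is a benign ordering issue, since the attach/detach controller handled the Node event before its own cache contained the node. A diagnostic sketch to list the daemon pods that landed on that node:
kubectl --kubeconfig=./kubeconfig -n kube-system get pods -o wide --field-selector spec.nodeName=capz-yx2tsa-md-0-dtt5p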
... skipping 88 lines ...
I0907 20:24:50.344652       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:24:50.344919       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-yx2tsa-md-0-r4w9v}
I0907 20:24:50.345475       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d640be6c1ba, ext:108400631000, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:50.348040       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 20:24:50.348383       1 controller.go:804] Finished updateLoadBalancerHosts
I0907 20:24:50.348745       1 controller.go:731] It took 0.000709465 seconds to finish nodeSyncInternal
W0907 20:24:50.348058       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-yx2tsa-md-0-r4w9v" does not exist
I0907 20:24:50.348617       1 taint_manager.go:440] "Updating known taints on node" node="capz-yx2tsa-md-0-r4w9v" taints=[]
I0907 20:24:50.348711       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d6494c8d35f, ext:110549664281, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:50.349814       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-yx2tsa-md-0-r4w9v], creating 1
I0907 20:24:50.351089       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d64809d06ab, ext:110211249609, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:50.351956       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d6494fa520f, ext:110552908077, loc:(*time.Location)(0x751a1a0)}}
I0907 20:24:50.352143       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-yx2tsa-md-0-r4w9v], creating 1
... skipping 469 lines ...
I0907 20:25:21.528060       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5d6c5f797d31, ext:141729014351, loc:(*time.Location)(0x751a1a0)}}
I0907 20:25:21.528117       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 20:25:21.528167       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0907 20:25:21.528254       1 daemon_controller.go:1102] Updating daemon set status
I0907 20:25:21.528416       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.388822ms)
I0907 20:25:21.627804       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:25:22.663905       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-yx2tsa-md-0-dtt5p transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:24:58 +0000 UTC,LastTransitionTime:2022-09-07 20:24:48 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:25:18 +0000 UTC,LastTransitionTime:2022-09-07 20:25:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:25:22.663994       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-dtt5p ReadyCondition updated. Updating timestamp.
I0907 20:25:22.677949       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:25:22.678328       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-yx2tsa-md-0-dtt5p}
I0907 20:25:22.678523       1 taint_manager.go:440] "Updating known taints on node" node="capz-yx2tsa-md-0-dtt5p" taints=[]
I0907 20:25:22.678668       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-yx2tsa-md-0-dtt5p"
I0907 20:25:22.678609       1 node_lifecycle_controller.go:893] Node capz-yx2tsa-md-0-dtt5p is healthy again, removing all taints
I0907 20:25:22.679002       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-yx2tsa-md-0-r4w9v transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:25:00 +0000 UTC,LastTransitionTime:2022-09-07 20:24:50 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:25:20 +0000 UTC,LastTransitionTime:2022-09-07 20:25:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:25:22.679390       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-r4w9v ReadyCondition updated. Updating timestamp.
I0907 20:25:22.691559       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:25:22.692908       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-yx2tsa-md-0-r4w9v}
I0907 20:25:22.693304       1 taint_manager.go:440] "Updating known taints on node" node="capz-yx2tsa-md-0-r4w9v" taints=[]
I0907 20:25:22.693445       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-yx2tsa-md-0-r4w9v"
I0907 20:25:22.694418       1 node_lifecycle_controller.go:893] Node capz-yx2tsa-md-0-r4w9v is healthy again, removing all taints
... skipping 126 lines ...
I0907 20:27:07.800784       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-8rm8b" with version 1188
I0907 20:27:07.802860       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a StorageAccountType:Standard_LRS Size:10
I0907 20:27:09.923582       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8081
I0907 20:27:09.967603       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-qkk22, uid e47f3416-a607-463b-bf8b-34f5f93d538c, event type delete
I0907 20:27:09.992515       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid 184b7ee6-44ff-42ab-859f-71cee51f844a, event type delete
I0907 20:27:09.996140       1 publisher.go:186] Finished syncing namespace "azuredisk-8081" (2.998671ms)
E0907 20:27:09.997266       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-jkzfk" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0907 20:27:10.063202       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0907 20:27:10.063327       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid 7f5e9161-ee88-45b3-95fb-f3ce2e3a2adc, event type delete
I0907 20:27:10.063418       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.101µs)
I0907 20:27:10.116748       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (3.8µs)
I0907 20:27:10.117945       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8081, estimate: 0, errors: <nil>
I0907 20:27:10.133881       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (217.521606ms)
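Note: the tokens_controller errors of the form "unable to create new content in namespace ... because it is being terminated" are expected during test-namespace teardown: the namespace deleter removes the default ServiceAccount token Secret while the token controller is still trying to recreate it, and the API server rejects writes to a terminating namespace; the namespace still finishes deleting, as the "estimate: 0, errors: <nil>" lines confirm. An illustrative check of a namespace's phase while it is still terminating (namespace name taken from the lines above):
kubectl --kubeconfig=./kubeconfig get namespace azuredisk-8081 -o jsonpath='{.status.phase}{"\n"}'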
... skipping 53 lines ...
I0907 20:27:10.285800       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1353/pvc-8rm8b] status: phase Bound already set
I0907 20:27:10.285821       1 pv_controller.go:1038] volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" bound to claim "azuredisk-1353/pvc-8rm8b"
I0907 20:27:10.285897       1 pv_controller.go:1039] volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" status after binding: phase: Bound, bound to: "azuredisk-1353/pvc-8rm8b (uid: 00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)", boundByController: true
I0907 20:27:10.285922       1 pv_controller.go:1040] claim "azuredisk-1353/pvc-8rm8b" status after binding: phase: Bound, bound to: "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a", bindCompleted: true, boundByController: true
I0907 20:27:10.284683       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-8rm8b"
I0907 20:27:10.301155       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-4c76m, uid e757527a-59af-4728-9099-e1295c846ded, event type delete
E0907 20:27:10.314790       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-kgh8p" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0907 20:27:10.401407       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0907 20:27:10.401462       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid dd5cb5f2-e8ba-4769-a7a3-2ccb6d3ad1fe, event type delete
I0907 20:27:10.401695       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (1.9µs)
I0907 20:27:10.424607       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid 17fe8929-e950-463e-a744-4a7ede1a8231, event type delete
I0907 20:27:10.427414       1 publisher.go:186] Finished syncing namespace "azuredisk-2540" (2.549631ms)
I0907 20:27:10.431828       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.6µs)
I0907 20:27:10.435377       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0907 20:27:10.453906       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (204.030784ms)
I0907 20:27:10.592576       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0907 20:27:10.639304       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-89bsw, uid a50fa64e-f62d-40b3-b6fc-b865dc153783, event type delete
E0907 20:27:10.653437       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-9xlxh" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0907 20:27:10.662971       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid 723960d8-9d0b-4fa4-8b07-383757b213ac, event type delete
I0907 20:27:10.664875       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (1.833166ms)
I0907 20:27:10.670141       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid 083bb4ab-79be-4f5f-b0df-96f731f2ddf2, event type delete
I0907 20:27:10.671151       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0907 20:27:10.671589       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (3.2µs)
I0907 20:27:10.747177       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4728, estimate: 0, errors: <nil>
... skipping 8 lines ...
I0907 20:27:10.819242       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-fq7pp"
I0907 20:27:10.859096       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a") from node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:27:10.926122       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5466
I0907 20:27:10.956065       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:27:10.969603       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5466, name default-token-khg67, uid d51a81c5-2b52-45ed-a9ed-2e02cc3fc1db, event type delete
I0907 20:27:10.992333       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5466, name kube-root-ca.crt, uid 53b60ea3-5e19-4c95-929b-388cefacbba1, event type delete
E0907 20:27:10.995042       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5466/default: secrets "default-token-66884" is forbidden: unable to create new content in namespace azuredisk-5466 because it is being terminated
I0907 20:27:10.995041       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (2.433321ms)
I0907 20:27:11.000797       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" lun 0 to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:27:11.000866       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - attach disk(capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) with DiskEncryptionSetID()
I0907 20:27:11.048118       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5466/default), service account deleted, removing tokens
I0907 20:27:11.048175       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5466, name default, uid a6dea501-09ae-4226-9632-4f7956cd4dbb, event type delete
I0907 20:27:11.048208       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (1.9µs)
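Note: the lines above trace the in-tree azure-disk attach path: the attacher finds no existing LUN for the disk (GetDiskLun), picks LUN 0, and issues a VM update on capz-yx2tsa-md-0-dtt5p to attach the managed disk. A hedged way to inspect the resulting data-disk/LUN layout from the Azure side (assumes the az CLI and the resource group shown in the disk URI):
az vm show -g capz-yx2tsa -n capz-yx2tsa-md-0-dtt5p --query "storageProfile.dataDisks[].{name:name, lun:lun}" -o table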
... skipping 9 lines ...
I0907 20:27:11.349613       1 publisher.go:186] Finished syncing namespace "azuredisk-2790" (1.868469ms)
I0907 20:27:11.428078       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (4.2µs)
I0907 20:27:11.428400       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2790, estimate: 0, errors: <nil>
I0907 20:27:11.436675       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2790" (181.816569ms)
I0907 20:27:11.602271       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0907 20:27:11.719554       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-hnz96, uid d949de15-2774-4a34-ba81-782881170936, event type delete
E0907 20:27:11.734890       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-5pvdh" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0907 20:27:11.765705       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0907 20:27:11.765909       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid a36742eb-0f49-4b81-befd-65d3156907ce, event type delete
I0907 20:27:11.766081       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3.2µs)
I0907 20:27:11.771589       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid af98dcfc-09e1-4316-9254-826761fd16e7, event type delete
I0907 20:27:11.774138       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (2.43142ms)
I0907 20:27:11.802969       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3.1µs)
I0907 20:27:11.804817       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5356, estimate: 0, errors: <nil>
I0907 20:27:11.817833       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5356" (218.000647ms)
I0907 20:27:11.934179       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5194
I0907 20:27:11.966835       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5194, name kube-root-ca.crt, uid 4a356c62-8cef-475b-81df-156aae2f8f93, event type delete
I0907 20:27:11.969754       1 publisher.go:186] Finished syncing namespace "azuredisk-5194" (2.857359ms)
I0907 20:27:12.043331       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5194, name default-token-6tjkd, uid 7c005264-e572-4467-a2ab-d48933987b6a, event type delete
E0907 20:27:12.066306       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5194/default: secrets "default-token-v9pbv" is forbidden: unable to create new content in namespace azuredisk-5194 because it is being terminated
I0907 20:27:12.099587       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5194/default), service account deleted, removing tokens
I0907 20:27:12.100096       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5194, name default, uid 3017da50-41d5-47d9-a7da-117d28ec15ba, event type delete
I0907 20:27:12.100129       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (3.4µs)
I0907 20:27:12.127464       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5194" (2.5µs)
I0907 20:27:12.128036       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5194, estimate: 0, errors: <nil>
I0907 20:27:12.139595       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5194" (208.235161ms)
... skipping 160 lines ...
I0907 20:27:36.595694       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: claim azuredisk-1353/pvc-8rm8b not found
I0907 20:27:36.595934       1 pv_controller.go:1108] reclaimVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: policy is Delete
I0907 20:27:36.596095       1 pv_controller.go:1752] scheduleOperation[delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]]
I0907 20:27:36.596251       1 pv_controller.go:1763] operation "delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]" is already running, skipping
I0907 20:27:36.597546       1 pv_controller.go:1340] isVolumeReleased[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is released
I0907 20:27:36.597583       1 pv_controller.go:1404] doDeleteVolume [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]
I0907 20:27:36.640471       1 pv_controller.go:1259] deletion of volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:27:36.640503       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: set phase Failed
I0907 20:27:36.640513       1 pv_controller.go:858] updating PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: set phase Failed
I0907 20:27:36.646240       1 pv_protection_controller.go:205] Got event on PV pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a
I0907 20:27:36.646284       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" with version 1299
I0907 20:27:36.646313       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: phase: Failed, bound to: "azuredisk-1353/pvc-8rm8b (uid: 00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)", boundByController: true
I0907 20:27:36.646342       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is bound to claim azuredisk-1353/pvc-8rm8b
I0907 20:27:36.646364       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: claim azuredisk-1353/pvc-8rm8b not found
I0907 20:27:36.646377       1 pv_controller.go:1108] reclaimVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: policy is Delete
I0907 20:27:36.646392       1 pv_controller.go:1752] scheduleOperation[delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]]
I0907 20:27:36.646428       1 pv_controller.go:1763] operation "delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]" is already running, skipping
I0907 20:27:36.646258       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" with version 1299
I0907 20:27:36.646764       1 pv_controller.go:879] volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" entered phase "Failed"
I0907 20:27:36.647022       1 pv_controller.go:901] volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
E0907 20:27:36.647531       1 goroutinemap.go:150] Operation for "delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]" failed. No retries permitted until 2022-09-07 20:27:37.147486246 +0000 UTC m=+277.348444896 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:27:36.647996       1 event.go:291] "Event occurred" object="pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted"
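Note: this deletion failure is a race, not a bug: the claim was deleted first, so the PV controller's Delete reclaim tries to remove the Azure disk while it is still attached to capz-yx2tsa-md-0-dtt5p; the PV is marked Failed, a VolumeFailedDelete warning event is recorded, and the operation is retried with exponential backoff (durationBeforeRetry 500ms here, 1s below) until the detach that starts next completes. The warning event is visible on the PV itself (diagnostic sketch):
kubectl --kubeconfig=./kubeconfig describe pv pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a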
I0907 20:27:38.425577       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:27:38.425941       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a to the node "capz-yx2tsa-md-0-dtt5p" mounted false
I0907 20:27:38.476095       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-dtt5p" succeeded. VolumesAttached: []
I0907 20:27:38.476387       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a") on node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:27:38.476964       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
... skipping 3 lines ...
I0907 20:27:38.520480       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a"
I0907 20:27:38.520509       1 azure_controller_standard.go:166] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)
I0907 20:27:42.568454       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:27:42.572607       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:27:42.663790       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:27:42.663860       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" with version 1299
I0907 20:27:42.663902       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: phase: Failed, bound to: "azuredisk-1353/pvc-8rm8b (uid: 00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)", boundByController: true
I0907 20:27:42.663941       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is bound to claim azuredisk-1353/pvc-8rm8b
I0907 20:27:42.663970       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: claim azuredisk-1353/pvc-8rm8b not found
I0907 20:27:42.663981       1 pv_controller.go:1108] reclaimVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: policy is Delete
I0907 20:27:42.663999       1 pv_controller.go:1752] scheduleOperation[delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]]
I0907 20:27:42.664040       1 pv_controller.go:1231] deleteVolumeOperation [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a] started
I0907 20:27:42.668769       1 pv_controller.go:1340] isVolumeReleased[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is released
I0907 20:27:42.668792       1 pv_controller.go:1404] doDeleteVolume [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]
I0907 20:27:42.668825       1 pv_controller.go:1259] deletion of volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) since it's in attaching or detaching state
I0907 20:27:42.668835       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: set phase Failed
I0907 20:27:42.668842       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: phase Failed already set
E0907 20:27:42.668863       1 goroutinemap.go:150] Operation for "delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]" failed. No retries permitted until 2022-09-07 20:27:43.668849011 +0000 UTC m=+283.869807661 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) since it's in attaching or detaching state
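Editor's note (illustrative, not part of the log): the delete attempt above fails because the managed disk is still in an attaching or detaching state, and the controller records that no retries are permitted until a later timestamp, doubling the delay after each failure (500ms, 1s, 2s delays all appear in this log). A minimal Go sketch of that retry-with-doubling-backoff pattern follows; deleteDisk and errDiskBusy are hypothetical placeholders and not the kube-controller-manager or cloud-provider API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errDiskBusy stands in for the "attaching or detaching state" failure above.
var errDiskBusy = errors.New("disk is in attaching or detaching state")

// deleteDisk is a hypothetical delete call that keeps failing until the
// detach has completed (simulated here by the attempt counter).
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errDiskBusy
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		err := deleteDisk(attempt)
		if err == nil {
			fmt.Println("deleteVolumeOperation: success")
			return
		}
		fmt.Printf("delete failed: %v; no retries permitted for %v\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // the delay doubles after every failed attempt
	}
}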
I0907 20:27:42.719957       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-dtt5p ReadyCondition updated. Updating timestamp.
I0907 20:27:42.843488       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="60.205µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42770" resp=200
I0907 20:27:43.020902       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:27:48.437887       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:27:48.437925       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a to the node "capz-yx2tsa-md-0-dtt5p" mounted false
I0907 20:27:52.634510       1 gc_controller.go:161] GC'ing orphaned
... skipping 4 lines ...
I0907 20:27:53.921170       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a) succeeded
I0907 20:27:53.921363       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a was detached from node:capz-yx2tsa-md-0-dtt5p
I0907 20:27:53.921605       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a") on node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:27:57.573544       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:27:57.663964       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:27:57.664290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" with version 1299
I0907 20:27:57.664346       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: phase: Failed, bound to: "azuredisk-1353/pvc-8rm8b (uid: 00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)", boundByController: true
I0907 20:27:57.664516       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is bound to claim azuredisk-1353/pvc-8rm8b
I0907 20:27:57.664617       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: claim azuredisk-1353/pvc-8rm8b not found
I0907 20:27:57.664633       1 pv_controller.go:1108] reclaimVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: policy is Delete
I0907 20:27:57.664728       1 pv_controller.go:1752] scheduleOperation[delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]]
I0907 20:27:57.664879       1 pv_controller.go:1231] deleteVolumeOperation [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a] started
I0907 20:27:57.669398       1 pv_controller.go:1340] isVolumeReleased[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is released
... skipping 2 lines ...
I0907 20:28:02.888567       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a
I0907 20:28:02.888789       1 pv_controller.go:1435] volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" deleted
I0907 20:28:02.888962       1 pv_controller.go:1283] deleteVolumeOperation [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: success
I0907 20:28:02.896074       1 pv_protection_controller.go:205] Got event on PV pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a
I0907 20:28:02.896205       1 pv_protection_controller.go:125] Processing PV pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a
I0907 20:28:02.896743       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" with version 1341
I0907 20:28:02.896785       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: phase: Failed, bound to: "azuredisk-1353/pvc-8rm8b (uid: 00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a)", boundByController: true
I0907 20:28:02.896977       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: volume is bound to claim azuredisk-1353/pvc-8rm8b
I0907 20:28:02.897004       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: claim azuredisk-1353/pvc-8rm8b not found
I0907 20:28:02.897028       1 pv_controller.go:1108] reclaimVolume[pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a]: policy is Delete
I0907 20:28:02.897184       1 pv_controller.go:1752] scheduleOperation[delete-pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a[debb7d09-fbd2-4559-95b9-c08cf8e08875]]
I0907 20:28:02.897285       1 pv_controller.go:1231] deleteVolumeOperation [pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a] started
I0907 20:28:02.904570       1 pv_controller.go:1243] Volume "pvc-00bd71c0-b3bb-472f-b705-4cf0e5cc8c1a" is already being deleted
... skipping 118 lines ...
I0907 20:28:12.072850       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-fq7pp.1712adee4aa15c2e, uid 16470ca6-8576-4811-a1ec-dbdbb3ae9c6f, event type delete
I0907 20:28:12.076769       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" lun 0 to node "capz-yx2tsa-md-0-r4w9v".
I0907 20:28:12.076842       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - attach disk(capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) with DiskEncryptionSetID()
I0907 20:28:12.077966       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-8rm8b.1712ade8c7d290ab, uid 2b25cf0f-169d-4610-985e-7e205889a52d, event type delete
I0907 20:28:12.082196       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-8rm8b.1712ade95d573629, uid bdf5f7e8-dedb-4c4c-ade0-c036388df3a6, event type delete
I0907 20:28:12.155022       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-7r7p5, uid 1ab91da6-bf4a-43b6-8790-d2a542a8c14d, event type delete
E0907 20:28:12.168728       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-42sm9" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0907 20:28:12.169354       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1353/default), service account deleted, removing tokens
I0907 20:28:12.170781       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1353, name default, uid f614af04-d74e-4a96-aadf-40f4885e8f59, event type delete
I0907 20:28:12.170810       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (2.9µs)
I0907 20:28:12.202334       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (2.6µs)
I0907 20:28:12.204875       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1353, estimate: 0, errors: <nil>
I0907 20:28:12.221532       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1353" (234.704446ms)
... skipping 46 lines ...
I0907 20:28:12.843952       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="86.208µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51950" resp=200
I0907 20:28:13.068998       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:28:13.193646       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-156
I0907 20:28:13.231239       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-156, name kube-root-ca.crt, uid 4c7aea91-2cf9-4c9c-af26-d0075144e69d, event type delete
I0907 20:28:13.233910       1 publisher.go:186] Finished syncing namespace "azuredisk-156" (2.609838ms)
I0907 20:28:13.270307       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-156, name default-token-7hjkr, uid 457271c0-8adb-4e4d-8f83-f5d0d9c66ab0, event type delete
E0907 20:28:13.294484       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-156/default: secrets "default-token-j56dk" is forbidden: unable to create new content in namespace azuredisk-156 because it is being terminated
I0907 20:28:13.357609       1 tokens_controller.go:252] syncServiceAccount(azuredisk-156/default), service account deleted, removing tokens
I0907 20:28:13.357881       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-156, name default, uid b6077769-fd73-4ef8-be33-81161d027bdd, event type delete
I0907 20:28:13.357924       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (19.102µs)
I0907 20:28:13.376021       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (2.5µs)
I0907 20:28:13.376051       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-156, estimate: 0, errors: <nil>
I0907 20:28:13.385726       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-156" (194.872305ms)
... skipping 351 lines ...
I0907 20:30:16.316021       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: claim azuredisk-1563/pvc-zl6mb not found
I0907 20:30:16.316181       1 pv_controller.go:1108] reclaimVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: policy is Delete
I0907 20:30:16.316353       1 pv_controller.go:1752] scheduleOperation[delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]]
I0907 20:30:16.316457       1 pv_controller.go:1763] operation "delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]" is already running, skipping
I0907 20:30:16.318387       1 pv_controller.go:1340] isVolumeReleased[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is released
I0907 20:30:16.318404       1 pv_controller.go:1404] doDeleteVolume [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]
I0907 20:30:16.354798       1 pv_controller.go:1259] deletion of volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:30:16.354825       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: set phase Failed
I0907 20:30:16.354836       1 pv_controller.go:858] updating PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: set phase Failed
I0907 20:30:16.363239       1 pv_protection_controller.go:205] Got event on PV pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e
I0907 20:30:16.363508       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" with version 1604
I0907 20:30:16.363695       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: phase: Failed, bound to: "azuredisk-1563/pvc-zl6mb (uid: 47d40d72-04ad-48f7-a29d-0354f0dfd82e)", boundByController: true
I0907 20:30:16.363855       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is bound to claim azuredisk-1563/pvc-zl6mb
I0907 20:30:16.364014       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: claim azuredisk-1563/pvc-zl6mb not found
I0907 20:30:16.364150       1 pv_controller.go:1108] reclaimVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: policy is Delete
I0907 20:30:16.364302       1 pv_controller.go:1752] scheduleOperation[delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]]
I0907 20:30:16.364440       1 pv_controller.go:1763] operation "delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]" is already running, skipping
I0907 20:30:16.364959       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" with version 1604
I0907 20:30:16.364987       1 pv_controller.go:879] volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" entered phase "Failed"
I0907 20:30:16.364998       1 pv_controller.go:901] volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:30:16.365178       1 goroutinemap.go:150] Operation for "delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]" failed. No retries permitted until 2022-09-07 20:30:16.865161802 +0000 UTC m=+437.066120452 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:30:16.365521       1 event.go:291] "Event occurred" object="pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
I0907 20:30:18.971650       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 20 items received
I0907 20:30:20.694424       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:30:20.694467       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:30:20.769705       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-r4w9v" succeeded. VolumesAttached: []
I0907 20:30:20.769820       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e") on node "capz-yx2tsa-md-0-r4w9v" 
... skipping 6 lines ...
I0907 20:30:22.743777       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-r4w9v ReadyCondition updated. Updating timestamp.
I0907 20:30:22.843447       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="84.309µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55812" resp=200
I0907 20:30:23.162417       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:30:27.583054       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:27.670686       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:30:27.670769       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" with version 1604
I0907 20:30:27.670812       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: phase: Failed, bound to: "azuredisk-1563/pvc-zl6mb (uid: 47d40d72-04ad-48f7-a29d-0354f0dfd82e)", boundByController: true
I0907 20:30:27.670856       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is bound to claim azuredisk-1563/pvc-zl6mb
I0907 20:30:27.670883       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: claim azuredisk-1563/pvc-zl6mb not found
I0907 20:30:27.670896       1 pv_controller.go:1108] reclaimVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: policy is Delete
I0907 20:30:27.670913       1 pv_controller.go:1752] scheduleOperation[delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]]
I0907 20:30:27.670954       1 pv_controller.go:1231] deleteVolumeOperation [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e] started
I0907 20:30:27.676488       1 pv_controller.go:1340] isVolumeReleased[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is released
I0907 20:30:27.676522       1 pv_controller.go:1404] doDeleteVolume [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]
I0907 20:30:27.676566       1 pv_controller.go:1259] deletion of volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) since it's in attaching or detaching state
I0907 20:30:27.676590       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: set phase Failed
I0907 20:30:27.676606       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: phase Failed already set
E0907 20:30:27.676648       1 goroutinemap.go:150] Operation for "delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]" failed. No retries permitted until 2022-09-07 20:30:28.676617666 +0000 UTC m=+448.877576416 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) since it's in attaching or detaching state
I0907 20:30:31.620878       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0907 20:30:31.683445       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 16 items received
I0907 20:30:32.647103       1 gc_controller.go:161] GC'ing orphaned
I0907 20:30:32.647136       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:30:32.843304       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.006µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45002" resp=200
I0907 20:30:36.168982       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) returned with <nil>
I0907 20:30:36.169023       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e) succeeded
I0907 20:30:36.169033       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:30:36.169059       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:30:42.572793       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:42.584038       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:30:42.671346       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:30:42.671427       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" with version 1604
I0907 20:30:42.671469       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: phase: Failed, bound to: "azuredisk-1563/pvc-zl6mb (uid: 47d40d72-04ad-48f7-a29d-0354f0dfd82e)", boundByController: true
I0907 20:30:42.671510       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is bound to claim azuredisk-1563/pvc-zl6mb
I0907 20:30:42.671536       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: claim azuredisk-1563/pvc-zl6mb not found
I0907 20:30:42.671549       1 pv_controller.go:1108] reclaimVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: policy is Delete
I0907 20:30:42.671567       1 pv_controller.go:1752] scheduleOperation[delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]]
I0907 20:30:42.671609       1 pv_controller.go:1231] deleteVolumeOperation [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e] started
I0907 20:30:42.675568       1 pv_controller.go:1340] isVolumeReleased[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is released
... skipping 5 lines ...
I0907 20:30:47.860943       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e
I0907 20:30:47.861009       1 pv_controller.go:1435] volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" deleted
I0907 20:30:47.861248       1 pv_controller.go:1283] deleteVolumeOperation [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: success
I0907 20:30:47.870823       1 pv_protection_controller.go:205] Got event on PV pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e
I0907 20:30:47.870854       1 pv_protection_controller.go:125] Processing PV pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e
I0907 20:30:47.871309       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" with version 1653
I0907 20:30:47.871362       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: phase: Failed, bound to: "azuredisk-1563/pvc-zl6mb (uid: 47d40d72-04ad-48f7-a29d-0354f0dfd82e)", boundByController: true
I0907 20:30:47.871391       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: volume is bound to claim azuredisk-1563/pvc-zl6mb
I0907 20:30:47.871413       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: claim azuredisk-1563/pvc-zl6mb not found
I0907 20:30:47.871429       1 pv_controller.go:1108] reclaimVolume[pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e]: policy is Delete
I0907 20:30:47.871446       1 pv_controller.go:1752] scheduleOperation[delete-pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e[0edeef18-a2ae-4543-8e6c-87a14b5fb6bd]]
I0907 20:30:47.871511       1 pv_controller.go:1231] deleteVolumeOperation [pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e] started
I0907 20:30:47.876419       1 pv_controller.go:1243] Volume "pvc-47d40d72-04ad-48f7-a29d-0354f0dfd82e" is already being deleted
... skipping 252 lines ...
I0907 20:31:10.725620       1 pv_controller.go:1108] reclaimVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: policy is Delete
I0907 20:31:10.725631       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]]
I0907 20:31:10.725726       1 pv_controller.go:1763] operation "delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]" is already running, skipping
I0907 20:31:10.725842       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b] started
I0907 20:31:10.735159       1 pv_controller.go:1340] isVolumeReleased[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is released
I0907 20:31:10.735179       1 pv_controller.go:1404] doDeleteVolume [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]
I0907 20:31:10.760943       1 pv_controller.go:1259] deletion of volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:31:10.761213       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: set phase Failed
I0907 20:31:10.761400       1 pv_controller.go:858] updating PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: set phase Failed
I0907 20:31:10.766359       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" with version 1749
I0907 20:31:10.767153       1 pv_controller.go:879] volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" entered phase "Failed"
I0907 20:31:10.767175       1 pv_controller.go:901] volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:31:10.767881       1 goroutinemap.go:150] Operation for "delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]" failed. No retries permitted until 2022-09-07 20:31:11.26735271 +0000 UTC m=+491.468311460 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:31:10.768300       1 event.go:291] "Event occurred" object="pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
I0907 20:31:10.770175       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:31:10.770207       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:31:10.770977       1 pv_protection_controller.go:205] Got event on PV pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b
I0907 20:31:10.771275       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" with version 1749
I0907 20:31:10.771980       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: phase: Failed, bound to: "azuredisk-7463/pvc-48xng (uid: 1a45d52a-2b48-429b-a269-7da2d1697a9b)", boundByController: true
I0907 20:31:10.772233       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is bound to claim azuredisk-7463/pvc-48xng
I0907 20:31:10.772373       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: claim azuredisk-7463/pvc-48xng not found
I0907 20:31:10.772389       1 pv_controller.go:1108] reclaimVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: policy is Delete
I0907 20:31:10.772406       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]]
I0907 20:31:10.772416       1 pv_controller.go:1765] operation "delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]" postponed due to exponential backoff
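Editor's note (illustrative, not part of the log): two other scheduleOperation outcomes appear in this stretch, an operation that is "already running, skipping" and one "postponed due to exponential backoff". The toy sketch below shows a named-operation map with both checks; opMap is a hypothetical illustration, not the goroutinemap package the controller actually uses.

package main

import (
	"fmt"
	"sync"
	"time"
)

type opMap struct {
	mu       sync.Mutex
	running  map[string]bool
	notUntil map[string]time.Time
}

func newOpMap() *opMap {
	return &opMap{running: map[string]bool{}, notUntil: map[string]time.Time{}}
}

// schedule starts fn under the given name unless it is already running or
// still inside its backoff window.
func (m *opMap) schedule(name string, fn func()) {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.running[name] {
		fmt.Printf("operation %q is already running, skipping\n", name)
		return
	}
	if until, ok := m.notUntil[name]; ok && time.Now().Before(until) {
		fmt.Printf("operation %q postponed due to exponential backoff\n", name)
		return
	}
	m.running[name] = true
	go func() {
		defer func() { m.mu.Lock(); m.running[name] = false; m.mu.Unlock() }()
		fn()
	}()
}

// backoff blocks the named operation until now+d, as a failed delete does.
func (m *opMap) backoff(name string, d time.Duration) {
	m.mu.Lock()
	m.notUntil[name] = time.Now().Add(d)
	m.mu.Unlock()
}

func main() {
	m := newOpMap()
	m.schedule("delete-pvc-demo", func() { time.Sleep(200 * time.Millisecond) })
	m.schedule("delete-pvc-demo", func() {}) // already running, skipping
	time.Sleep(300 * time.Millisecond)       // first operation finishes
	m.backoff("delete-pvc-demo", time.Second)
	m.schedule("delete-pvc-demo", func() {}) // postponed due to backoff
}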
I0907 20:31:10.806710       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-r4w9v" succeeded. VolumesAttached: []
... skipping 8 lines ...
I0907 20:31:12.581148       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 11 items received
I0907 20:31:12.584366       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:31:12.648911       1 gc_controller.go:161] GC'ing orphaned
I0907 20:31:12.648968       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:31:12.672509       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:31:12.672584       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" with version 1749
I0907 20:31:12.672623       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: phase: Failed, bound to: "azuredisk-7463/pvc-48xng (uid: 1a45d52a-2b48-429b-a269-7da2d1697a9b)", boundByController: true
I0907 20:31:12.672670       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is bound to claim azuredisk-7463/pvc-48xng
I0907 20:31:12.672712       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: claim azuredisk-7463/pvc-48xng not found
I0907 20:31:12.672726       1 pv_controller.go:1108] reclaimVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: policy is Delete
I0907 20:31:12.672745       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]]
I0907 20:31:12.672789       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b] started
I0907 20:31:12.675511       1 pv_controller.go:1340] isVolumeReleased[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is released
I0907 20:31:12.675531       1 pv_controller.go:1404] doDeleteVolume [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]
I0907 20:31:12.675565       1 pv_controller.go:1259] deletion of volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) since it's in attaching or detaching state
I0907 20:31:12.675585       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: set phase Failed
I0907 20:31:12.675597       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: phase Failed already set
E0907 20:31:12.675625       1 goroutinemap.go:150] Operation for "delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]" failed. No retries permitted until 2022-09-07 20:31:13.675606596 +0000 UTC m=+493.876565246 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b) since it's in attaching or detaching state
I0907 20:31:12.748737       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-r4w9v ReadyCondition updated. Updating timestamp.
I0907 20:31:12.843312       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="107.811µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60190" resp=200
I0907 20:31:13.231897       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:31:14.590532       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 21 items received
I0907 20:31:20.780259       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:31:20.780534       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b to the node "capz-yx2tsa-md-0-r4w9v" mounted false
... skipping 7 lines ...
I0907 20:31:26.180928       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:31:26.180954       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:31:26.768578       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:31:27.584720       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:31:27.673209       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:31:27.673316       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" with version 1749
I0907 20:31:27.673378       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: phase: Failed, bound to: "azuredisk-7463/pvc-48xng (uid: 1a45d52a-2b48-429b-a269-7da2d1697a9b)", boundByController: true
I0907 20:31:27.673444       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is bound to claim azuredisk-7463/pvc-48xng
I0907 20:31:27.673499       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: claim azuredisk-7463/pvc-48xng not found
I0907 20:31:27.673514       1 pv_controller.go:1108] reclaimVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: policy is Delete
I0907 20:31:27.673539       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]]
I0907 20:31:27.673605       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b] started
I0907 20:31:27.679117       1 pv_controller.go:1340] isVolumeReleased[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is released
... skipping 9 lines ...
I0907 20:31:32.854634       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b
I0907 20:31:32.854661       1 pv_controller.go:1435] volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" deleted
I0907 20:31:32.854673       1 pv_controller.go:1283] deleteVolumeOperation [pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: success
I0907 20:31:32.861695       1 pv_protection_controller.go:205] Got event on PV pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b
I0907 20:31:32.861732       1 pv_protection_controller.go:125] Processing PV pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b
I0907 20:31:32.862051       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" with version 1783
I0907 20:31:32.862096       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: phase: Failed, bound to: "azuredisk-7463/pvc-48xng (uid: 1a45d52a-2b48-429b-a269-7da2d1697a9b)", boundByController: true
I0907 20:31:32.862127       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: volume is bound to claim azuredisk-7463/pvc-48xng
I0907 20:31:32.862157       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: claim azuredisk-7463/pvc-48xng not found
I0907 20:31:32.862172       1 pv_controller.go:1108] reclaimVolume[pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b]: policy is Delete
I0907 20:31:32.862190       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]]
I0907 20:31:32.862204       1 pv_controller.go:1763] operation "delete-pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b[943b3444-526d-48ee-89e1-547007b057c8]" is already running, skipping
I0907 20:31:32.866811       1 pv_controller_base.go:235] volume "pvc-1a45d52a-2b48-429b-a269-7da2d1697a9b" deleted
... skipping 104 lines ...
I0907 20:31:39.789138       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893") from node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:31:39.837401       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:31:39.882693       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" lun 0 to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:31:39.882768       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - attach disk(capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) with DiskEncryptionSetID()
I0907 20:31:41.107552       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7463
I0907 20:31:41.155121       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7463, name default-token-gw7v2, uid 9584d50b-a948-443b-9ccb-ebfa2af2e423, event type delete
E0907 20:31:41.169377       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7463/default: secrets "default-token-hcj9r" is forbidden: unable to create new content in namespace azuredisk-7463 because it is being terminated
I0907 20:31:41.190048       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7463/default), service account deleted, removing tokens
I0907 20:31:41.190266       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7463, name default, uid 872d8483-0427-4030-9336-aa70dc1adeee, event type delete
I0907 20:31:41.190308       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7463" (2.8µs)
I0907 20:31:41.203690       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-4pq2l.1712ae1d3f2c17ed, uid 1c61879c-9331-475c-8241-26b656c57923, event type delete
I0907 20:31:41.208279       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-4pq2l.1712ae1fadccee0d, uid 22f6dea8-7efe-4919-a483-170ae266b462, event type delete
I0907 20:31:41.212133       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7463, name azuredisk-volume-tester-4pq2l.1712ae201cd37479, uid 81edd179-f83d-40b9-be80-75f19a939b81, event type delete
... skipping 132 lines ...
I0907 20:31:55.345884       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:31:55.345978       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:31:55.346115       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:31:55.346216       1 pv_controller.go:1763] operation "delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]" is already running, skipping
I0907 20:31:55.347935       1 pv_controller.go:1340] isVolumeReleased[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is released
I0907 20:31:55.347953       1 pv_controller.go:1404] doDeleteVolume [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]
I0907 20:31:55.408599       1 pv_controller.go:1259] deletion of volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:31:55.408627       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: set phase Failed
I0907 20:31:55.408638       1 pv_controller.go:858] updating PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: set phase Failed
I0907 20:31:55.412849       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1875
I0907 20:31:55.413070       1 pv_controller.go:879] volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" entered phase "Failed"
I0907 20:31:55.413285       1 pv_controller.go:901] volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
E0907 20:31:55.413509       1 goroutinemap.go:150] Operation for "delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]" failed. No retries permitted until 2022-09-07 20:31:55.913488951 +0000 UTC m=+536.114447601 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:31:55.412884       1 pv_protection_controller.go:205] Got event on PV pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893
I0907 20:31:55.412968       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1875
I0907 20:31:55.415303       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase: Failed, bound to: "azuredisk-9241/pvc-52grh (uid: ff9b4f66-0293-42fa-b7c5-1c49c76c2893)", boundByController: true
I0907 20:31:55.415496       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is bound to claim azuredisk-9241/pvc-52grh
I0907 20:31:55.415707       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:31:55.415846       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:31:55.416000       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:31:55.416130       1 pv_controller.go:1765] operation "delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]" postponed due to exponential backoff
I0907 20:31:55.416320       1 event.go:291] "Event occurred" object="pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted"
I0907 20:31:57.586459       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:31:57.674562       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:31:57.674656       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1875
I0907 20:31:57.674839       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase: Failed, bound to: "azuredisk-9241/pvc-52grh (uid: ff9b4f66-0293-42fa-b7c5-1c49c76c2893)", boundByController: true
I0907 20:31:57.674903       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is bound to claim azuredisk-9241/pvc-52grh
I0907 20:31:57.674946       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:31:57.674961       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:31:57.674989       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:31:57.675039       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893] started
I0907 20:31:57.685768       1 pv_controller.go:1340] isVolumeReleased[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is released
I0907 20:31:57.685788       1 pv_controller.go:1404] doDeleteVolume [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]
I0907 20:31:57.708484       1 pv_controller.go:1259] deletion of volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:31:57.708509       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: set phase Failed
I0907 20:31:57.708519       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase Failed already set
E0907 20:31:57.708571       1 goroutinemap.go:150] Operation for "delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]" failed. No retries permitted until 2022-09-07 20:31:58.708528013 +0000 UTC m=+538.909486763 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:31:58.631983       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:31:58.632217       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 to the node "capz-yx2tsa-md-0-dtt5p" mounted false
I0907 20:31:58.665758       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-dtt5p" succeeded. VolumesAttached: []
I0907 20:31:58.665999       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893") on node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:31:58.666150       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:31:58.666278       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 to the node "capz-yx2tsa-md-0-dtt5p" mounted false
... skipping 8 lines ...
I0907 20:32:12.575117       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:32:12.587370       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:32:12.651816       1 gc_controller.go:161] GC'ing orphaned
I0907 20:32:12.651849       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:32:12.675224       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:32:12.675301       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1875
I0907 20:32:12.675341       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase: Failed, bound to: "azuredisk-9241/pvc-52grh (uid: ff9b4f66-0293-42fa-b7c5-1c49c76c2893)", boundByController: true
I0907 20:32:12.675381       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is bound to claim azuredisk-9241/pvc-52grh
I0907 20:32:12.675406       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:32:12.675416       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:32:12.675433       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:32:12.675461       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893] started
I0907 20:32:12.678847       1 pv_controller.go:1340] isVolumeReleased[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is released
I0907 20:32:12.678874       1 pv_controller.go:1404] doDeleteVolume [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]
I0907 20:32:12.678908       1 pv_controller.go:1259] deletion of volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) since it's in attaching or detaching state
I0907 20:32:12.678963       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: set phase Failed
I0907 20:32:12.678983       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase Failed already set
E0907 20:32:12.679013       1 goroutinemap.go:150] Operation for "delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]" failed. No retries permitted until 2022-09-07 20:32:14.678993404 +0000 UTC m=+554.879952154 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) since it's in attaching or detaching state
I0907 20:32:12.844443       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="62.806µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51856" resp=200
I0907 20:32:13.278695       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:32:14.099679       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) returned with <nil>
I0907 20:32:14.099723       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893) succeeded
I0907 20:32:14.099733       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893 was detached from node:capz-yx2tsa-md-0-dtt5p
I0907 20:32:14.099956       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893") on node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:32:14.153799       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0907 20:32:14.209649       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:32:18.454205       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 11 items received
I0907 20:32:22.842962       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="94.51µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46080" resp=200
I0907 20:32:27.587510       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:32:27.675921       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:32:27.675997       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1875
I0907 20:32:27.676041       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase: Failed, bound to: "azuredisk-9241/pvc-52grh (uid: ff9b4f66-0293-42fa-b7c5-1c49c76c2893)", boundByController: true
I0907 20:32:27.676179       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is bound to claim azuredisk-9241/pvc-52grh
I0907 20:32:27.676205       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:32:27.676211       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:32:27.676224       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:32:27.676251       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893] started
I0907 20:32:27.680565       1 pv_controller.go:1340] isVolumeReleased[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is released
... skipping 6 lines ...
I0907 20:32:32.885466       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893
I0907 20:32:32.885504       1 pv_controller.go:1435] volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" deleted
I0907 20:32:32.885519       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: success
I0907 20:32:32.893409       1 pv_protection_controller.go:205] Got event on PV pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893
I0907 20:32:32.893448       1 pv_protection_controller.go:125] Processing PV pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893
I0907 20:32:32.893788       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" with version 1930
I0907 20:32:32.893833       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: phase: Failed, bound to: "azuredisk-9241/pvc-52grh (uid: ff9b4f66-0293-42fa-b7c5-1c49c76c2893)", boundByController: true
I0907 20:32:32.893863       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: volume is bound to claim azuredisk-9241/pvc-52grh
I0907 20:32:32.893890       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: claim azuredisk-9241/pvc-52grh not found
I0907 20:32:32.893907       1 pv_controller.go:1108] reclaimVolume[pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893]: policy is Delete
I0907 20:32:32.893925       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893[2f6e53f9-5749-4869-b81a-24dbb37a6cf3]]
I0907 20:32:32.893954       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893] started
I0907 20:32:32.897595       1 pv_controller.go:1243] Volume "pvc-ff9b4f66-0293-42fa-b7c5-1c49c76c2893" is already being deleted
... skipping 851 lines ...
I0907 20:33:57.767102       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]]
I0907 20:33:57.767130       1 pv_controller.go:1763] operation "delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]" is already running, skipping
I0907 20:33:57.767190       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127] started
I0907 20:33:57.766501       1 pv_protection_controller.go:205] Got event on PV pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127
I0907 20:33:57.769039       1 pv_controller.go:1340] isVolumeReleased[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is released
I0907 20:33:57.769057       1 pv_controller.go:1404] doDeleteVolume [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]
I0907 20:33:57.799424       1 pv_controller.go:1259] deletion of volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:33:57.799457       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: set phase Failed
I0907 20:33:57.799468       1 pv_controller.go:858] updating PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: set phase Failed
I0907 20:33:57.803217       1 pv_protection_controller.go:205] Got event on PV pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127
I0907 20:33:57.803533       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" with version 2154
I0907 20:33:57.803814       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: phase: Failed, bound to: "azuredisk-9336/pvc-9wjbk (uid: d828adc0-225c-472a-8ce0-3a9e4bdbe127)", boundByController: true
I0907 20:33:57.804107       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is bound to claim azuredisk-9336/pvc-9wjbk
I0907 20:33:57.804314       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: claim azuredisk-9336/pvc-9wjbk not found
I0907 20:33:57.804511       1 pv_controller.go:1108] reclaimVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: policy is Delete
I0907 20:33:57.804770       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]]
I0907 20:33:57.804964       1 pv_controller.go:1763] operation "delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]" is already running, skipping
I0907 20:33:57.805365       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" with version 2154
I0907 20:33:57.805640       1 pv_controller.go:879] volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" entered phase "Failed"
I0907 20:33:57.805823       1 pv_controller.go:901] volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:33:57.806152       1 goroutinemap.go:150] Operation for "delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]" failed. No retries permitted until 2022-09-07 20:33:58.306117213 +0000 UTC m=+658.507075964 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:33:57.806501       1 event.go:291] "Event occurred" object="pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
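Here the deletion is refused for a different reason: the disk is still attached to a VM, so the volume is marked Failed and a VolumeFailedDelete warning event is recorded. As a rough illustration of the two guards the log keeps tripping over before a managed disk can be deleted (type and field names invented for this sketch; the real checks live in the Azure cloud provider and the Azure API):

package main

import (
	"errors"
	"fmt"
)

// diskState captures the two pre-delete conditions seen in this log.
type diskState struct {
	attachedTo string // VM resource ID, empty when detached
	inTransit  bool   // an attach or detach is still in progress
}

func canDelete(d diskState) error {
	if d.inTransit {
		return errors.New("failed to delete disk since it's in attaching or detaching state")
	}
	if d.attachedTo != "" {
		return fmt.Errorf("disk already attached to node(%s), could not be deleted", d.attachedTo)
	}
	return nil
}

func main() {
	states := []diskState{
		{attachedTo: ".../virtualMachines/example-node"}, // still attached: delete refused
		{inTransit: true},                                // detach in flight: delete refused
		{},                                               // fully detached: delete can proceed
	}
	for _, s := range states {
		if err := canDelete(s); err != nil {
			fmt.Println("refuse:", err)
			continue
		}
		fmt.Println("ok to delete")
	}
}

Only after the attach/detach controller finishes the detach (a few lines below) does the delete retry go through.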
I0907 20:34:00.959374       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:34:00.959411       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 to the node "capz-yx2tsa-md-0-r4w9v" mounted true
I0907 20:34:00.959424       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127 to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:34:00.995918       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2\"}]}}" for node "capz-yx2tsa-md-0-r4w9v" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 0}]
I0907 20:34:00.996020       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127") on node "capz-yx2tsa-md-0-r4w9v" 
... skipping 9 lines ...
I0907 20:34:12.576804       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:34:12.592960       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:34:12.655931       1 gc_controller.go:161] GC'ing orphaned
I0907 20:34:12.655966       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:34:12.682846       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:34:12.683194       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" with version 2154
I0907 20:34:12.683416       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: phase: Failed, bound to: "azuredisk-9336/pvc-9wjbk (uid: d828adc0-225c-472a-8ce0-3a9e4bdbe127)", boundByController: true
I0907 20:34:12.683623       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is bound to claim azuredisk-9336/pvc-9wjbk
I0907 20:34:12.683651       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: claim azuredisk-9336/pvc-9wjbk not found
I0907 20:34:12.683661       1 pv_controller.go:1108] reclaimVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: policy is Delete
I0907 20:34:12.683680       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]]
I0907 20:34:12.683708       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 1955
I0907 20:34:12.683430       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-k94wm" with version 1958
... skipping 41 lines ...
I0907 20:34:12.684300       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl found: phase: Bound, bound to: "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9", bindCompleted: true, boundByController: true
I0907 20:34:12.684313       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: all is bound
I0907 20:34:12.684324       1 pv_controller.go:858] updating PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: set phase Bound
I0907 20:34:12.684333       1 pv_controller.go:861] updating PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase Bound already set
I0907 20:34:12.687475       1 pv_controller.go:1340] isVolumeReleased[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is released
I0907 20:34:12.687504       1 pv_controller.go:1404] doDeleteVolume [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]
I0907 20:34:12.687534       1 pv_controller.go:1259] deletion of volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) since it's in attaching or detaching state
I0907 20:34:12.687543       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: set phase Failed
I0907 20:34:12.687550       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: phase Failed already set
E0907 20:34:12.687581       1 goroutinemap.go:150] Operation for "delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]" failed. No retries permitted until 2022-09-07 20:34:13.687558871 +0000 UTC m=+673.888517521 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) since it's in attaching or detaching state
I0907 20:34:12.844240       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.907µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49474" resp=200
I0907 20:34:13.375208       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:34:14.834285       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-control-plane-jxjcc"
I0907 20:34:17.779208       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-control-plane-jxjcc ReadyCondition updated. Updating timestamp.
I0907 20:34:21.514586       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) returned with <nil>
I0907 20:34:21.514704       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127) succeeded
... skipping 14 lines ...
I0907 20:34:27.684559       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is bound to claim azuredisk-9336/pvc-dqxfl
I0907 20:34:27.684585       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl found: phase: Bound, bound to: "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9", bindCompleted: true, boundByController: true
I0907 20:34:27.684601       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: all is bound
I0907 20:34:27.684615       1 pv_controller.go:858] updating PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: set phase Bound
I0907 20:34:27.684626       1 pv_controller.go:861] updating PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase Bound already set
I0907 20:34:27.684642       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" with version 2154
I0907 20:34:27.684706       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: phase: Failed, bound to: "azuredisk-9336/pvc-9wjbk (uid: d828adc0-225c-472a-8ce0-3a9e4bdbe127)", boundByController: true
I0907 20:34:27.684734       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is bound to claim azuredisk-9336/pvc-9wjbk
I0907 20:34:27.684762       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: claim azuredisk-9336/pvc-9wjbk not found
I0907 20:34:27.684774       1 pv_controller.go:1108] reclaimVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: policy is Delete
I0907 20:34:27.684791       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]]
I0907 20:34:27.684826       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127] started
I0907 20:34:27.685134       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-k94wm" with version 1958
... skipping 36 lines ...
I0907 20:34:32.858516       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127
I0907 20:34:32.858616       1 pv_controller.go:1435] volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" deleted
I0907 20:34:32.858722       1 pv_controller.go:1283] deleteVolumeOperation [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: success
I0907 20:34:32.864570       1 pv_protection_controller.go:205] Got event on PV pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127
I0907 20:34:32.864606       1 pv_protection_controller.go:125] Processing PV pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127
I0907 20:34:32.864990       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" with version 2209
I0907 20:34:32.865317       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: phase: Failed, bound to: "azuredisk-9336/pvc-9wjbk (uid: d828adc0-225c-472a-8ce0-3a9e4bdbe127)", boundByController: true
I0907 20:34:32.865434       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: volume is bound to claim azuredisk-9336/pvc-9wjbk
I0907 20:34:32.865556       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: claim azuredisk-9336/pvc-9wjbk not found
I0907 20:34:32.865645       1 pv_controller.go:1108] reclaimVolume[pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127]: policy is Delete
I0907 20:34:32.865728       1 pv_controller.go:1752] scheduleOperation[delete-pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127[45fdc251-c031-48e2-b4e3-b6b21ea1dcb8]]
I0907 20:34:32.865906       1 pv_controller.go:1231] deleteVolumeOperation [pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127] started
I0907 20:34:32.869852       1 pv_controller.go:1243] Volume "pvc-d828adc0-225c-472a-8ce0-3a9e4bdbe127" is already being deleted
... skipping 192 lines ...
I0907 20:35:06.767254       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl not found
I0907 20:35:06.767310       1 pv_controller.go:1108] reclaimVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: policy is Delete
I0907 20:35:06.767391       1 pv_controller.go:1752] scheduleOperation[delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]]
I0907 20:35:06.767410       1 pv_controller.go:1763] operation "delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]" is already running, skipping
I0907 20:35:06.770706       1 pv_controller.go:1340] isVolumeReleased[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is released
I0907 20:35:06.770725       1 pv_controller.go:1404] doDeleteVolume [pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]
I0907 20:35:06.811224       1 pv_controller.go:1259] deletion of volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:35:06.811250       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: set phase Failed
I0907 20:35:06.811259       1 pv_controller.go:858] updating PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: set phase Failed
I0907 20:35:06.814769       1 pv_protection_controller.go:205] Got event on PV pvc-858379f7-fc23-4eef-adb2-478bc45e29e9
I0907 20:35:06.814805       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" with version 2270
I0907 20:35:06.814834       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase: Failed, bound to: "azuredisk-9336/pvc-dqxfl (uid: 858379f7-fc23-4eef-adb2-478bc45e29e9)", boundByController: true
I0907 20:35:06.814859       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is bound to claim azuredisk-9336/pvc-dqxfl
I0907 20:35:06.814877       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl not found
I0907 20:35:06.814886       1 pv_controller.go:1108] reclaimVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: policy is Delete
I0907 20:35:06.814900       1 pv_controller.go:1752] scheduleOperation[delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]]
I0907 20:35:06.814908       1 pv_controller.go:1763] operation "delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]" is already running, skipping
I0907 20:35:06.815431       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" with version 2270
I0907 20:35:06.815454       1 pv_controller.go:879] volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" entered phase "Failed"
I0907 20:35:06.815464       1 pv_controller.go:901] volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
E0907 20:35:06.815502       1 goroutinemap.go:150] Operation for "delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]" failed. No retries permitted until 2022-09-07 20:35:07.315482953 +0000 UTC m=+727.516441603 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:35:06.815652       1 event.go:291] "Event occurred" object="pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted"
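The repeated "operation ... is already running, skipping" lines come from goroutinemap, which makes sure at most one delete operation per volume is in flight while resyncs keep re-scheduling it. A deliberately simplified, hypothetical per-key single-flight map (the real type also tracks per-operation backoff) could look like:

package main

import (
	"fmt"
	"sync"
)

// opMap runs at most one operation per key at a time.
type opMap struct {
	mu      sync.Mutex
	running map[string]bool
}

func newOpMap() *opMap { return &opMap{running: map[string]bool{}} }

// Run starts fn in a goroutine unless an operation for key is still in flight.
func (m *opMap) Run(key string, fn func()) bool {
	m.mu.Lock()
	if m.running[key] {
		m.mu.Unlock()
		return false // already running, skip
	}
	m.running[key] = true
	m.mu.Unlock()

	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.running, key)
			m.mu.Unlock()
		}()
		fn()
	}()
	return true
}

func main() {
	m := newOpMap()
	release := make(chan struct{})
	done := make(chan struct{})

	m.Run("delete-pvc-example", func() { <-release; close(done) }) // first delete starts
	second := m.Run("delete-pvc-example", func() {})               // skipped: op still in flight

	close(release)
	<-done
	fmt.Println("second attempt started:", second)
}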
I0907 20:35:08.800504       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:35:08.800540       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 to the node "capz-yx2tsa-md-0-dtt5p" mounted false
I0907 20:35:08.832806       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-dtt5p"
I0907 20:35:08.832842       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 to the node "capz-yx2tsa-md-0-dtt5p" mounted false
I0907 20:35:08.835007       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-dtt5p" succeeded. VolumesAttached: []
... skipping 27 lines ...
I0907 20:35:12.686971       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" with version 2270
I0907 20:35:12.686997       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-9336/pvc-k94wm]: already bound to "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2"
I0907 20:35:12.687010       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9336/pvc-k94wm] status: set phase Bound
I0907 20:35:12.687038       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-k94wm] status: phase Bound already set
I0907 20:35:12.687051       1 pv_controller.go:1038] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" bound to claim "azuredisk-9336/pvc-k94wm"
I0907 20:35:12.687069       1 pv_controller.go:1039] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:35:12.686998       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase: Failed, bound to: "azuredisk-9336/pvc-dqxfl (uid: 858379f7-fc23-4eef-adb2-478bc45e29e9)", boundByController: true
I0907 20:35:12.687088       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-k94wm" status after binding: phase: Bound, bound to: "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2", bindCompleted: true, boundByController: true
I0907 20:35:12.687113       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is bound to claim azuredisk-9336/pvc-dqxfl
I0907 20:35:12.687165       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl not found
I0907 20:35:12.687181       1 pv_controller.go:1108] reclaimVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: policy is Delete
I0907 20:35:12.687214       1 pv_controller.go:1752] scheduleOperation[delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]]
I0907 20:35:12.687315       1 pv_controller.go:1231] deleteVolumeOperation [pvc-858379f7-fc23-4eef-adb2-478bc45e29e9] started
I0907 20:35:12.691007       1 pv_controller.go:1340] isVolumeReleased[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is released
I0907 20:35:12.691028       1 pv_controller.go:1404] doDeleteVolume [pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]
I0907 20:35:12.691063       1 pv_controller.go:1259] deletion of volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) since it's in attaching or detaching state
I0907 20:35:12.691080       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: set phase Failed
I0907 20:35:12.691091       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase Failed already set
E0907 20:35:12.691119       1 goroutinemap.go:150] Operation for "delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]" failed. No retries permitted until 2022-09-07 20:35:13.691099927 +0000 UTC m=+733.892058677 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) since it's in attaching or detaching state
I0907 20:35:12.788913       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-dtt5p ReadyCondition updated. Updating timestamp.
I0907 20:35:12.843197       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.007µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35712" resp=200
I0907 20:35:13.415070       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:35:19.359923       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) returned with <nil>
I0907 20:35:19.359974       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9) succeeded
I0907 20:35:19.359985       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9 was detached from node:capz-yx2tsa-md-0-dtt5p
... skipping 12 lines ...
I0907 20:35:27.687296       1 pv_controller.go:861] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase Bound already set
I0907 20:35:27.687299       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-k94wm]: volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" found: phase: Bound, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:35:27.687309       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-k94wm]: claim is already correctly bound
I0907 20:35:27.687310       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" with version 2270
I0907 20:35:27.687319       1 pv_controller.go:1012] binding volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" to claim "azuredisk-9336/pvc-k94wm"
I0907 20:35:27.687329       1 pv_controller.go:910] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: binding to "azuredisk-9336/pvc-k94wm"
I0907 20:35:27.687331       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase: Failed, bound to: "azuredisk-9336/pvc-dqxfl (uid: 858379f7-fc23-4eef-adb2-478bc45e29e9)", boundByController: true
I0907 20:35:27.687348       1 pv_controller.go:922] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: already bound to "azuredisk-9336/pvc-k94wm"
I0907 20:35:27.687352       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is bound to claim azuredisk-9336/pvc-dqxfl
I0907 20:35:27.687357       1 pv_controller.go:858] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: set phase Bound
I0907 20:35:27.687366       1 pv_controller.go:861] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase Bound already set
I0907 20:35:27.687373       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl not found
I0907 20:35:27.687375       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-9336/pvc-k94wm]: binding to "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2"
... skipping 16 lines ...
I0907 20:35:32.859495       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9
I0907 20:35:32.859541       1 pv_controller.go:1435] volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" deleted
I0907 20:35:32.859553       1 pv_controller.go:1283] deleteVolumeOperation [pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: success
I0907 20:35:32.866884       1 pv_protection_controller.go:205] Got event on PV pvc-858379f7-fc23-4eef-adb2-478bc45e29e9
I0907 20:35:32.867495       1 pv_protection_controller.go:125] Processing PV pvc-858379f7-fc23-4eef-adb2-478bc45e29e9
I0907 20:35:32.867955       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-858379f7-fc23-4eef-adb2-478bc45e29e9" with version 2310
I0907 20:35:32.868141       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: phase: Failed, bound to: "azuredisk-9336/pvc-dqxfl (uid: 858379f7-fc23-4eef-adb2-478bc45e29e9)", boundByController: true
I0907 20:35:32.868319       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: volume is bound to claim azuredisk-9336/pvc-dqxfl
I0907 20:35:32.868487       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: claim azuredisk-9336/pvc-dqxfl not found
I0907 20:35:32.868631       1 pv_controller.go:1108] reclaimVolume[pvc-858379f7-fc23-4eef-adb2-478bc45e29e9]: policy is Delete
I0907 20:35:32.868776       1 pv_controller.go:1752] scheduleOperation[delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]]
I0907 20:35:32.868910       1 pv_controller.go:1763] operation "delete-pvc-858379f7-fc23-4eef-adb2-478bc45e29e9[c9faff41-9676-4ce7-b445-ca6a3b867e0e]" is already running, skipping
I0907 20:35:32.872942       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-858379f7-fc23-4eef-adb2-478bc45e29e9
... skipping 141 lines ...
I0907 20:36:08.030168       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]]
I0907 20:36:08.030182       1 pv_controller.go:1763] operation "delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]" is already running, skipping
I0907 20:36:08.030267       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2] started
I0907 20:36:08.030571       1 pv_protection_controller.go:205] Got event on PV pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2
I0907 20:36:08.035234       1 pv_controller.go:1340] isVolumeReleased[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is released
I0907 20:36:08.035389       1 pv_controller.go:1404] doDeleteVolume [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]
I0907 20:36:08.064608       1 pv_controller.go:1259] deletion of volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:36:08.064634       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: set phase Failed
I0907 20:36:08.064644       1 pv_controller.go:858] updating PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: set phase Failed
I0907 20:36:08.069448       1 pv_protection_controller.go:205] Got event on PV pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2
I0907 20:36:08.070070       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 2375
I0907 20:36:08.070410       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase: Failed, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:36:08.070655       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is bound to claim azuredisk-9336/pvc-k94wm
I0907 20:36:08.070799       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: claim azuredisk-9336/pvc-k94wm not found
I0907 20:36:08.070819       1 pv_controller.go:1108] reclaimVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: policy is Delete
I0907 20:36:08.070851       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]]
I0907 20:36:08.070877       1 pv_controller.go:1763] operation "delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]" is already running, skipping
I0907 20:36:08.071183       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 2375
I0907 20:36:08.071210       1 pv_controller.go:879] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" entered phase "Failed"
I0907 20:36:08.071253       1 pv_controller.go:901] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:36:08.071332       1 goroutinemap.go:150] Operation for "delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]" failed. No retries permitted until 2022-09-07 20:36:08.571279464 +0000 UTC m=+788.772238214 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:36:08.071657       1 event.go:291] "Event occurred" object="pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
I0907 20:36:11.064368       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:36:11.064402       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:36:11.132702       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-r4w9v" succeeded. VolumesAttached: []
I0907 20:36:11.133736       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:36:11.133362       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
... skipping 5 lines ...
I0907 20:36:12.579543       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:12.599135       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:12.659994       1 gc_controller.go:161] GC'ing orphaned
I0907 20:36:12.660031       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:36:12.688838       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:36:12.688907       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 2375
I0907 20:36:12.688956       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase: Failed, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:36:12.688996       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is bound to claim azuredisk-9336/pvc-k94wm
I0907 20:36:12.689022       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: claim azuredisk-9336/pvc-k94wm not found
I0907 20:36:12.689034       1 pv_controller.go:1108] reclaimVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: policy is Delete
I0907 20:36:12.689052       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]]
I0907 20:36:12.689088       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2] started
I0907 20:36:12.691838       1 pv_controller.go:1340] isVolumeReleased[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is released
I0907 20:36:12.691860       1 pv_controller.go:1404] doDeleteVolume [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]
I0907 20:36:12.691920       1 pv_controller.go:1259] deletion of volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) since it's in attaching or detaching state
I0907 20:36:12.691940       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: set phase Failed
I0907 20:36:12.691952       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase Failed already set
E0907 20:36:12.692004       1 goroutinemap.go:150] Operation for "delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]" failed. No retries permitted until 2022-09-07 20:36:13.691982301 +0000 UTC m=+793.892940951 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) since it's in attaching or detaching state
I0907 20:36:12.798931       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-r4w9v ReadyCondition updated. Updating timestamp.
I0907 20:36:12.843937       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.206µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:32826" resp=200
I0907 20:36:13.452893       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:36:13.567937       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0907 20:36:20.578228       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0907 20:36:22.844493       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="82.808µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55296" resp=200
... skipping 2 lines ...
I0907 20:36:26.561147       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2) succeeded
I0907 20:36:26.561159       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2 was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:36:26.561185       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:36:27.600198       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:27.689158       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:36:27.689510       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 2375
I0907 20:36:27.689712       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase: Failed, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:36:27.689868       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is bound to claim azuredisk-9336/pvc-k94wm
I0907 20:36:27.689920       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: claim azuredisk-9336/pvc-k94wm not found
I0907 20:36:27.689938       1 pv_controller.go:1108] reclaimVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: policy is Delete
I0907 20:36:27.689998       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]]
I0907 20:36:27.690260       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2] started
I0907 20:36:27.695568       1 pv_controller.go:1340] isVolumeReleased[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is released
... skipping 10 lines ...
I0907 20:36:32.921172       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2
I0907 20:36:32.921206       1 pv_controller.go:1435] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" deleted
I0907 20:36:32.921220       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: success
I0907 20:36:32.927671       1 pv_protection_controller.go:205] Got event on PV pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2
I0907 20:36:32.927895       1 pv_protection_controller.go:125] Processing PV pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2
I0907 20:36:32.928444       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" with version 2413
I0907 20:36:32.928697       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: phase: Failed, bound to: "azuredisk-9336/pvc-k94wm (uid: ba7665f8-2dc1-4c5b-b289-e8d56b79eda2)", boundByController: true
I0907 20:36:32.928862       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: volume is bound to claim azuredisk-9336/pvc-k94wm
I0907 20:36:32.929019       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: claim azuredisk-9336/pvc-k94wm not found
I0907 20:36:32.929289       1 pv_controller.go:1108] reclaimVolume[pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2]: policy is Delete
I0907 20:36:32.929474       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]]
I0907 20:36:32.929638       1 pv_controller.go:1763] operation "delete-pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2[7cbf1df6-62c0-4110-87af-e92528537783]" is already running, skipping
I0907 20:36:32.932453       1 pv_controller_base.go:235] volume "pvc-ba7665f8-2dc1-4c5b-b289-e8d56b79eda2" deleted
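At this point the full lifecycle for the volume has completed: claim deleted, reclaim policy evaluated, disk deleted in Azure, PV object removed. A toy version of the reclaim decision logged as "reclaimVolume[...]: policy is Delete" followed by "scheduleOperation[delete-...]" (hypothetical helper operating on plain strings, not on PersistentVolume objects):

package main

import "fmt"

// reclaimAction sketches what happens to a released volume once its claim is gone.
func reclaimAction(policy string, claimExists bool) string {
	if claimExists {
		return "volume still bound, nothing to reclaim"
	}
	switch policy {
	case "Delete":
		return "schedule deleteVolumeOperation"
	case "Retain":
		return "leave volume Released for manual cleanup"
	default:
		return "unknown policy, leave volume alone"
	}
}

func main() {
	fmt.Println(reclaimAction("Delete", false)) // the case seen throughout this log
	fmt.Println(reclaimAction("Retain", false))
	fmt.Println(reclaimAction("Delete", true))
}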
... skipping 28 lines ...
I0907 20:36:37.994874       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-2205/pvc-cfclr" duration="6.901µs"
I0907 20:36:37.995172       1 disruption.go:415] addPod called on pod "azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt"
I0907 20:36:37.995463       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt, PodDisruptionBudget controller will avoid syncing.
I0907 20:36:37.995788       1 disruption.go:418] No matching pdb for pod "azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt"
I0907 20:36:37.996041       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-2205/azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt"
I0907 20:36:37.997092       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" duration="23.67307ms"
I0907 20:36:37.997400       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-6tlvx\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:36:37.997660       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" startTime="2022-09-07 20:36:37.997601932 +0000 UTC m=+818.198560582"
I0907 20:36:37.998258       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-6tlvx" timed out (false) [last progress check: 2022-09-07 20:36:37 +0000 UTC - now: 2022-09-07 20:36:37.998252592 +0000 UTC m=+818.199211342]
I0907 20:36:37.999948       1 controller_utils.go:581] Controller azuredisk-volume-tester-6tlvx-cb46bb4bf created pod azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt
I0907 20:36:38.000128       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-6tlvx-cb46bb4bf, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0907 20:36:38.000642       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-6tlvx-cb46bb4bf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt"
I0907 20:36:38.002269       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2205/pvc-cfclr" with version 2438
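The "Error syncing deployment ... the object has been modified; please apply your changes to the latest version and try again" line a few entries above is a routine optimistic-concurrency conflict: the controller wrote a deployment status based on a stale resourceVersion and will simply resync with the latest object. A small get-modify-update retry sketch with invented store/object types (not the controller's or client-go's actual code):

package main

import (
	"errors"
	"fmt"
)

var errConflict = errors.New("object has been modified; retry with the latest version")

type object struct {
	resourceVersion int
	replicas        int
}

type store struct{ current object }

func (s *store) get() object { return s.current }

// update succeeds only if the caller worked from the latest resourceVersion.
func (s *store) update(o object) error {
	if o.resourceVersion != s.current.resourceVersion {
		return errConflict
	}
	o.resourceVersion++
	s.current = o
	return nil
}

func main() {
	s := &store{current: object{resourceVersion: 1, replicas: 1}}

	stale := s.get()
	s.update(s.get()) // someone else updates first, bumping the version

	for {
		stale.replicas = 2
		if err := s.update(stale); err != nil {
			fmt.Println("conflict, refetching:", err)
			stale = s.get() // re-read the latest version and try again
			continue
		}
		break
	}
	fmt.Printf("final: %+v\n", s.current)
}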
... skipping 121 lines ...
I0907 20:36:41.126353       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - attach disk(capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) with DiskEncryptionSetID()
I0907 20:36:42.168464       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:36:42.359045       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9336
I0907 20:36:42.400653       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9336, name kube-root-ca.crt, uid 7745d91e-1a9f-4515-b0b9-1cc9c8d30b7d, event type delete
I0907 20:36:42.405435       1 publisher.go:186] Finished syncing namespace "azuredisk-9336" (4.738356ms)
I0907 20:36:42.423323       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9336, name default-token-fsnvf, uid 26f504ac-3dfd-4fea-8ffc-f6d992fa905f, event type delete
E0907 20:36:42.436843       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-7mj42" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I0907 20:36:42.485665       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae3d8dbc83fe, uid a8eb311d-5dcd-4a2e-b436-3bae980b2b56, event type delete
I0907 20:36:42.489601       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae3ffaebbb3e, uid e622e70f-ca15-4f0b-ae2d-a98e40f8e6bb, event type delete
I0907 20:36:42.493359       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae407349c873, uid aaa25ede-1b2a-41c7-9def-fc2eb801411a, event type delete
I0907 20:36:42.496984       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae4075bc43ac, uid d9511eea-d566-462d-97f9-050d99bb766b, event type delete
I0907 20:36:42.501093       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae407bfd110c, uid 0d4373d1-4cb5-4f30-ba56-f016bc373fa4, event type delete
I0907 20:36:42.504221       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9336, name azuredisk-volume-tester-cdhh6.1712ae41000e5af4, uid fcbc515a-ed71-4c8a-8d7c-995c5b9699e5, event type delete
... skipping 135 lines ...
I0907 20:36:56.907454       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-6tlvx-cb46bb4bf-z2xxw"
I0907 20:36:56.912281       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" duration="15.826988ms"
I0907 20:36:56.912456       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx"
I0907 20:36:56.912706       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" startTime="2022-09-07 20:36:56.91250948 +0000 UTC m=+837.113468230"
I0907 20:36:56.913146       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-6tlvx" for a progress check after 597s
I0907 20:36:56.913262       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-6tlvx" duration="735.769µs"
W0907 20:36:56.929877       1 reconciler.go:385] Multi-Attach error for volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df") from node "capz-yx2tsa-md-0-r4w9v" Volume is already used by pods azuredisk-2205/azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt on node capz-yx2tsa-md-0-dtt5p
I0907 20:36:56.930092       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-6tlvx-cb46bb4bf-z2xxw" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df\" Volume is already used by pod(s) azuredisk-volume-tester-6tlvx-cb46bb4bf-89rrt"
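The two entries above record the attach/detach controller refusing to attach the ReadWriteOnce azure disk to node capz-yx2tsa-md-0-r4w9v while it is still attached to capz-yx2tsa-md-0-dtt5p, which is what surfaces as the Multi-Attach warning and the FailedAttachVolume event. A minimal Go sketch of that kind of multi-attach guard follows; the types and function names are illustrative only and are not the actual reconciler code.

package main

import "fmt"

// attachment records that a volume is currently attached to a node.
type attachment struct {
	volume string
	node   string
}

// checkMultiAttach mirrors the idea behind the reconciler warning above: a
// volume that does not allow multi-attach may only be attached to one node
// at a time, so an attach request for a second node is rejected.
func checkMultiAttach(attached []attachment, volume, node string, multiAttachAllowed bool) error {
	if multiAttachAllowed {
		return nil
	}
	for _, a := range attached {
		if a.volume == volume && a.node != node {
			return fmt.Errorf("Multi-Attach error for volume %q: already used on node %q", volume, a.node)
		}
	}
	return nil
}

func main() {
	state := []attachment{{volume: "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df", node: "capz-yx2tsa-md-0-dtt5p"}}
	if err := checkMultiAttach(state, "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df", "capz-yx2tsa-md-0-r4w9v", false); err != nil {
		// Comparable to the reconciler.go Multi-Attach warning logged above.
		fmt.Println("W:", err)
	}
}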
I0907 20:36:57.602002       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:57.690103       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:36:57.690206       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2449
I0907 20:36:57.690288       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase: Bound, bound to: "azuredisk-2205/pvc-cfclr (uid: 93eaf0a6-95e3-4156-940f-b4cb809bb0df)", boundByController: true
I0907 20:36:57.690349       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is bound to claim azuredisk-2205/pvc-cfclr
I0907 20:36:57.690382       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr found: phase: Bound, bound to: "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df", bindCompleted: true, boundByController: true
... skipping 413 lines ...
I0907 20:38:31.316834       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr not found
I0907 20:38:31.316901       1 pv_controller.go:1108] reclaimVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: policy is Delete
I0907 20:38:31.316920       1 pv_controller.go:1752] scheduleOperation[delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]]
I0907 20:38:31.316930       1 pv_controller.go:1763] operation "delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]" is already running, skipping
I0907 20:38:31.319011       1 pv_controller.go:1340] isVolumeReleased[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is released
I0907 20:38:31.319034       1 pv_controller.go:1404] doDeleteVolume [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]
I0907 20:38:31.356420       1 pv_controller.go:1259] deletion of volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:38:31.356508       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: set phase Failed
I0907 20:38:31.356521       1 pv_controller.go:858] updating PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: set phase Failed
I0907 20:38:31.357313       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df on node "capz-yx2tsa-md-0-r4w9v"
I0907 20:38:31.362068       1 pv_protection_controller.go:205] Got event on PV pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df
I0907 20:38:31.362327       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2700
I0907 20:38:31.362475       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase: Failed, bound to: "azuredisk-2205/pvc-cfclr (uid: 93eaf0a6-95e3-4156-940f-b4cb809bb0df)", boundByController: true
I0907 20:38:31.362670       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is bound to claim azuredisk-2205/pvc-cfclr
I0907 20:38:31.362796       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr not found
I0907 20:38:31.362907       1 pv_controller.go:1108] reclaimVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: policy is Delete
I0907 20:38:31.362931       1 pv_controller.go:1752] scheduleOperation[delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]]
I0907 20:38:31.362940       1 pv_controller.go:1763] operation "delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]" is already running, skipping
I0907 20:38:31.363110       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2700
I0907 20:38:31.363134       1 pv_controller.go:879] volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" entered phase "Failed"
I0907 20:38:31.363144       1 pv_controller.go:901] volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:38:31.363183       1 goroutinemap.go:150] Operation for "delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]" failed. No retries permitted until 2022-09-07 20:38:31.863165549 +0000 UTC m=+932.064124299 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:38:31.363397       1 event.go:291] "Event occurred" object="pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
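The goroutinemap error above is the first retry guard for the delete-volume operation; later entries in this log show durationBeforeRetry doubling (500ms, then 1s here, and 500ms, 1s, 2s, 4s for a later volume) until the disk is finally detached and the delete succeeds. A simplified sketch of that exponential backoff, assuming plain doubling with a cap, is shown below; it is not the real goroutinemap implementation.

package main

import (
	"fmt"
	"time"
)

// expBackoff tracks how long to wait before the next retry is permitted.
type expBackoff struct {
	delay time.Duration // wait before the next retry
	max   time.Duration // upper bound on the wait
}

// next returns the current delay and doubles it for the following failure.
func (b *expBackoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	b := &expBackoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, b.next())
	}
	// Prints 500ms, 1s, 2s, 4s - the same progression visible in the
	// durationBeforeRetry values in this log.
}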
I0907 20:38:31.374419       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-yx2tsa-md-0-r4w9v" succeeded. VolumesAttached: []
I0907 20:38:31.374508       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:38:31.375854       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:38:31.376047       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:38:31.378367       1 operation_generator.go:1599] Verified volume is safe to detach for volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df") on node "capz-yx2tsa-md-0-r4w9v" 
... skipping 7 lines ...
I0907 20:38:32.842716       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="167.616µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36146" resp=200
I0907 20:38:35.077967       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 12 items received
I0907 20:38:42.581897       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:42.606271       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:42.696878       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:42.696959       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2700
I0907 20:38:42.697000       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase: Failed, bound to: "azuredisk-2205/pvc-cfclr (uid: 93eaf0a6-95e3-4156-940f-b4cb809bb0df)", boundByController: true
I0907 20:38:42.697042       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is bound to claim azuredisk-2205/pvc-cfclr
I0907 20:38:42.697070       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr not found
I0907 20:38:42.697082       1 pv_controller.go:1108] reclaimVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: policy is Delete
I0907 20:38:42.697099       1 pv_controller.go:1752] scheduleOperation[delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]]
I0907 20:38:42.697142       1 pv_controller.go:1231] deleteVolumeOperation [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df] started
I0907 20:38:42.705667       1 pv_controller.go:1340] isVolumeReleased[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is released
I0907 20:38:42.705691       1 pv_controller.go:1404] doDeleteVolume [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]
I0907 20:38:42.705785       1 pv_controller.go:1259] deletion of volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) since it's in attaching or detaching state
I0907 20:38:42.705801       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: set phase Failed
I0907 20:38:42.705813       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase Failed already set
E0907 20:38:42.705843       1 goroutinemap.go:150] Operation for "delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]" failed. No retries permitted until 2022-09-07 20:38:43.705823047 +0000 UTC m=+943.906781797 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) since it's in attaching or detaching state
I0907 20:38:42.842793       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.906µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51682" resp=200
I0907 20:38:43.466284       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:38:43.583138       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 53 items received
I0907 20:38:43.590805       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:38:44.685932       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:38:46.810782       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df) returned with <nil>
... skipping 4 lines ...
I0907 20:38:52.664775       1 gc_controller.go:161] GC'ing orphaned
I0907 20:38:52.664810       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:38:52.843605       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.806µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44342" resp=200
I0907 20:38:57.606448       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:57.697032       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:57.697112       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2700
I0907 20:38:57.697153       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase: Failed, bound to: "azuredisk-2205/pvc-cfclr (uid: 93eaf0a6-95e3-4156-940f-b4cb809bb0df)", boundByController: true
I0907 20:38:57.697196       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is bound to claim azuredisk-2205/pvc-cfclr
I0907 20:38:57.697224       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr not found
I0907 20:38:57.697233       1 pv_controller.go:1108] reclaimVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: policy is Delete
I0907 20:38:57.697251       1 pv_controller.go:1752] scheduleOperation[delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]]
I0907 20:38:57.697291       1 pv_controller.go:1231] deleteVolumeOperation [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df] started
I0907 20:38:57.709602       1 pv_controller.go:1340] isVolumeReleased[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is released
... skipping 3 lines ...
I0907 20:39:02.844136       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="78.107µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34990" resp=200
I0907 20:39:02.907775       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df
I0907 20:39:02.908115       1 pv_controller.go:1435] volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" deleted
I0907 20:39:02.908135       1 pv_controller.go:1283] deleteVolumeOperation [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: success
I0907 20:39:02.913970       1 pv_protection_controller.go:205] Got event on PV pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df
I0907 20:39:02.914232       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df" with version 2747
I0907 20:39:02.914427       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: phase: Failed, bound to: "azuredisk-2205/pvc-cfclr (uid: 93eaf0a6-95e3-4156-940f-b4cb809bb0df)", boundByController: true
I0907 20:39:02.914382       1 pv_protection_controller.go:125] Processing PV pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df
I0907 20:39:02.914663       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: volume is bound to claim azuredisk-2205/pvc-cfclr
I0907 20:39:02.914840       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: claim azuredisk-2205/pvc-cfclr not found
I0907 20:39:02.915009       1 pv_controller.go:1108] reclaimVolume[pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df]: policy is Delete
I0907 20:39:02.915175       1 pv_controller.go:1752] scheduleOperation[delete-pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df[9df5c05c-6d5b-405f-8176-d9756cacda96]]
I0907 20:39:02.915374       1 pv_controller.go:1231] deleteVolumeOperation [pvc-93eaf0a6-95e3-4156-940f-b4cb809bb0df] started
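The sequence above (repeated VolumeFailedDelete while the disk was attached or mid-detach, then "deleted a managed disk" and a successful deleteVolumeOperation once DetachVolume.Detach completed) reduces to a state check before deletion. A hedged Go sketch of that check follows; the diskState type and function are invented for illustration and stand in for the provisioner's view of the disk, not its actual API.

package main

import (
	"errors"
	"fmt"
)

// diskState is an illustrative stand-in for the state of a managed disk
// while a PV with reclaim policy Delete is being cleaned up.
type diskState int

const (
	detached diskState = iota
	attached
	attachingOrDetaching
)

// deleteManagedDisk refuses to delete a disk that is still attached to a VM
// or in the middle of an attach/detach, mirroring the two failure messages
// seen above; only a fully detached disk is deleted.
func deleteManagedDisk(state diskState) error {
	switch state {
	case attached:
		return errors.New("disk already attached to node, could not be deleted")
	case attachingOrDetaching:
		return errors.New("failed to delete disk since it's in attaching or detaching state")
	default:
		return nil // detach finished; deletion proceeds
	}
}

func main() {
	for _, s := range []diskState{attached, attachingOrDetaching, detached} {
		if err := deleteManagedDisk(s); err != nil {
			fmt.Println("retry later:", err)
			continue
		}
		fmt.Println("deleted a managed disk; PV deletion marked success")
	}
}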
... skipping 473 lines ...
I0907 20:39:25.655903       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1387/pvc-ltwpj] status: set phase Bound
I0907 20:39:25.656249       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-ltwpj] status: phase Bound already set
I0907 20:39:25.656513       1 pv_controller.go:1038] volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" bound to claim "azuredisk-1387/pvc-ltwpj"
I0907 20:39:25.656795       1 pv_controller.go:1039] volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:39:25.657278       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-ltwpj" status after binding: phase: Bound, bound to: "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa", bindCompleted: true, boundByController: true
I0907 20:39:25.703272       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-lst5b, uid f9193df1-b9b2-4df1-a92f-a41b3da32aab, event type delete
E0907 20:39:25.720254       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-kzzkv" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0907 20:39:25.722561       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid f7b3db8b-53b5-42bf-96c2-c9a79e04a56a, event type delete
I0907 20:39:25.722620       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0907 20:39:25.722656       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (1.801µs)
I0907 20:39:25.741810       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-kzfr4.1712ae909bbebd76, uid c029795f-8862-47c0-b3d6-ac74783a2acd, event type delete
I0907 20:39:25.766180       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 2a07750a-62e9-4144-a238-84b5bc390fb8, event type delete
I0907 20:39:25.769655       1 publisher.go:186] Finished syncing namespace "azuredisk-3410" (3.430023ms)
... skipping 14 lines ...
I0907 20:39:26.222785       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390" to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:39:26.223656       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:39:26.223903       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443" to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:39:26.256501       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443" lun 0 to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:39:26.258787       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - attach disk(capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443) with DiskEncryptionSetID()
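In the attach lines above, GetDiskLun reports no existing LUN for each disk on the node, so the attach path picks a free LUN (lun 0 for the first disk) before issuing the VM update. A minimal sketch of lowest-free-LUN selection is below, assuming simple first-free assignment; it is illustrative only and not the cloud provider's actual implementation.

package main

import "fmt"

// nextFreeLun returns the smallest LUN in [0, maxLuns) that is not already
// used by a data disk on the VM, or an error when every slot is taken.
func nextFreeLun(usedLuns []int32, maxLuns int32) (int32, error) {
	used := make(map[int32]bool, len(usedLuns))
	for _, l := range usedLuns {
		used[l] = true
	}
	for lun := int32(0); lun < maxLuns; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN out of %d slots", maxLuns)
}

func main() {
	// No data disks attached yet, so the first attach lands on lun 0,
	// matching the "Trying to attach volume ... lun 0" line above.
	lun, err := nextFreeLun(nil, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println("attach at lun", lun)
}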
I0907 20:39:26.272333       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-xqncf, uid 1a880c63-406a-43f4-9d1a-083cc7528d63, event type delete
E0907 20:39:26.306533       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-vk47n" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0907 20:39:26.318663       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8582, name kube-root-ca.crt, uid f2683a33-c38f-410a-b645-77e12841b403, event type delete
I0907 20:39:26.321207       1 publisher.go:186] Finished syncing namespace "azuredisk-8582" (2.490935ms)
I0907 20:39:26.327303       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8582/default), service account deleted, removing tokens
I0907 20:39:26.327371       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8582, name default, uid f1906605-afbb-4077-bf07-cece72dba044, event type delete
I0907 20:39:26.327404       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (2.201µs)
I0907 20:39:26.427786       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8582, estimate: 0, errors: <nil>
I0907 20:39:26.428283       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8582" (2.9µs)
I0907 20:39:26.438712       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8582" (233.361025ms)
I0907 20:39:26.774250       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7726
I0907 20:39:26.867507       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7726, name kube-root-ca.crt, uid 98c2e234-207b-4f50-8532-1dd258c58517, event type delete
I0907 20:39:26.869699       1 publisher.go:186] Finished syncing namespace "azuredisk-7726" (1.90488ms)
I0907 20:39:26.898852       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7726, name default-token-xsx6k, uid 20e635d6-871e-42d8-8f60-efdb5b9792c5, event type delete
E0907 20:39:26.912244       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7726/default: secrets "default-token-gjpkg" is forbidden: unable to create new content in namespace azuredisk-7726 because it is being terminated
I0907 20:39:26.912460       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7726, name default, uid 24580653-3fc0-4c7d-afd9-fc4f5cefbd14, event type delete
I0907 20:39:26.912563       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7726/default), service account deleted, removing tokens
I0907 20:39:26.912466       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (3.4µs)
I0907 20:39:26.928292       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7726, estimate: 0, errors: <nil>
I0907 20:39:26.928528       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7726" (2.8µs)
I0907 20:39:26.939195       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7726" (169.341452ms)
I0907 20:39:27.322200       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3086
I0907 20:39:27.433473       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-jnlwj, uid c457ed04-fc97-4380-9273-abc67cba3e63, event type delete
I0907 20:39:27.448155       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 0e92bc35-2a7b-4b50-9b4d-876a906f63c0, event type delete
I0907 20:39:27.448206       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.2µs)
E0907 20:39:27.450945       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-2vlfc" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0907 20:39:27.451225       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0907 20:39:27.498680       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid 1cb36aa0-fee3-4006-b761-126cd5ecc97d, event type delete
I0907 20:39:27.501880       1 publisher.go:186] Finished syncing namespace "azuredisk-3086" (3.149397ms)
I0907 20:39:27.507325       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0907 20:39:27.507539       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (3.4µs)
I0907 20:39:27.515552       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (196.92365ms)
... skipping 391 lines ...
I0907 20:39:59.941587       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:39:59.941717       1 pv_controller.go:1763] operation "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" is already running, skipping
I0907 20:39:59.940871       1 pv_protection_controller.go:205] Got event on PV pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa
I0907 20:39:59.941191       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa] started
I0907 20:39:59.943589       1 pv_controller.go:1340] isVolumeReleased[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is released
I0907 20:39:59.943750       1 pv_controller.go:1404] doDeleteVolume [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]
I0907 20:39:59.980952       1 pv_controller.go:1259] deletion of volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:39:59.980985       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: set phase Failed
I0907 20:39:59.980995       1 pv_controller.go:858] updating PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: set phase Failed
I0907 20:39:59.984862       1 pv_protection_controller.go:205] Got event on PV pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa
I0907 20:39:59.985072       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:39:59.985119       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:39:59.985210       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:39:59.985241       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:39:59.985250       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
I0907 20:39:59.985267       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:39:59.985276       1 pv_controller.go:1763] operation "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" is already running, skipping
I0907 20:39:59.985712       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:39:59.985748       1 pv_controller.go:879] volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" entered phase "Failed"
I0907 20:39:59.985782       1 pv_controller.go:901] volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
E0907 20:39:59.985960       1 goroutinemap.go:150] Operation for "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" failed. No retries permitted until 2022-09-07 20:40:00.485815065 +0000 UTC m=+1020.686773815 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:39:59.986353       1 event.go:291] "Event occurred" object="pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted"
I0907 20:40:00.001052       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-tud862 Secret: 2022-09-07T20:40:00Z
I0907 20:40:00.001086       1 tokencleaner.go:194] Deleting expired secret kube-system/bootstrap-token-tud862
I0907 20:40:00.012303       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-tud862" (11.270261ms)
I0907 20:40:00.012504       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-tud862, uid 82182d3b-59ad-451f-98b7-84abed7b212f, event type delete
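The tokencleaner lines above show a bootstrap token secret whose expiration time (2022-09-07T20:40:00Z) has passed being deleted from kube-system. Bootstrap token secrets carry their expiry as an RFC 3339 timestamp, conventionally under an "expiration" key in the secret data; a hedged sketch of that expiry check follows, with the key name and layout stated as an assumption rather than the controller's actual code.

package main

import (
	"fmt"
	"time"
)

// isExpired reports whether a bootstrap token secret should be cleaned up,
// given the RFC 3339 expiration string stored in its data (assumed to live
// under the "expiration" key).
func isExpired(expiration string, now time.Time) (bool, error) {
	t, err := time.Parse(time.RFC3339, expiration)
	if err != nil {
		return false, fmt.Errorf("parsing expiration %q: %w", expiration, err)
	}
	return !now.Before(t), nil
}

func main() {
	// The controller's clock is just past the recorded expiry, so the
	// secret is considered expired and deleted, as logged above.
	now := time.Date(2022, 9, 7, 20, 40, 0, 50000000, time.UTC)
	expired, err := isExpired("2022-09-07T20:40:00Z", now)
	if err != nil {
		panic(err)
	}
	fmt.Println("bootstrap-token-tud862 expired:", expired)
}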
I0907 20:40:02.831687       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-dtt5p ReadyCondition updated. Updating timestamp.
... skipping 15 lines ...
I0907 20:40:12.701020       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: volume is bound to claim azuredisk-1387/pvc-2d5mc
I0907 20:40:12.701042       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: claim azuredisk-1387/pvc-2d5mc found: phase: Bound, bound to: "pvc-fda7411d-bc13-474e-bf36-0aa72f61f443", bindCompleted: true, boundByController: true
I0907 20:40:12.701057       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: all is bound
I0907 20:40:12.701067       1 pv_controller.go:858] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: set phase Bound
I0907 20:40:12.701078       1 pv_controller.go:861] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: phase Bound already set
I0907 20:40:12.701096       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:40:12.701116       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:40:12.701140       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:40:12.701161       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:40:12.701179       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
I0907 20:40:12.701199       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:40:12.701234       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa] started
I0907 20:40:12.701638       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-2d5mc" with version 2873
... skipping 27 lines ...
I0907 20:40:12.702088       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5] status: phase Bound already set
I0907 20:40:12.702099       1 pv_controller.go:1038] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" bound to claim "azuredisk-1387/pvc-jbhx5"
I0907 20:40:12.702128       1 pv_controller.go:1039] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-jbhx5 (uid: 14901273-395d-4dd3-8400-3a56cd53c390)", boundByController: true
I0907 20:40:12.702143       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-jbhx5" status after binding: phase: Bound, bound to: "pvc-14901273-395d-4dd3-8400-3a56cd53c390", bindCompleted: true, boundByController: true
I0907 20:40:12.707300       1 pv_controller.go:1340] isVolumeReleased[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is released
I0907 20:40:12.707321       1 pv_controller.go:1404] doDeleteVolume [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]
I0907 20:40:12.769470       1 pv_controller.go:1259] deletion of volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:40:12.769495       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: set phase Failed
I0907 20:40:12.769506       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase Failed already set
E0907 20:40:12.769535       1 goroutinemap.go:150] Operation for "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" failed. No retries permitted until 2022-09-07 20:40:13.769515877 +0000 UTC m=+1033.970474627 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:40:12.842709       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.607µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49984" resp=200
I0907 20:40:13.647899       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:40:14.582739       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443) returned with <nil>
I0907 20:40:14.582789       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443) succeeded
I0907 20:40:14.582800       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443 was detached from node:capz-yx2tsa-md-0-dtt5p
I0907 20:40:14.582828       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-fda7411d-bc13-474e-bf36-0aa72f61f443" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-fda7411d-bc13-474e-bf36-0aa72f61f443") on node "capz-yx2tsa-md-0-dtt5p" 
... skipping 48 lines ...
I0907 20:40:27.701442       1 pv_controller.go:858] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: set phase Bound
I0907 20:40:27.701448       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5]: binding to "pvc-14901273-395d-4dd3-8400-3a56cd53c390"
I0907 20:40:27.701451       1 pv_controller.go:861] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: phase Bound already set
I0907 20:40:27.701463       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:40:27.701469       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5]: already bound to "pvc-14901273-395d-4dd3-8400-3a56cd53c390"
I0907 20:40:27.701478       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5] status: set phase Bound
I0907 20:40:27.701483       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:40:27.701496       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5] status: phase Bound already set
I0907 20:40:27.701505       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:40:27.701509       1 pv_controller.go:1038] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" bound to claim "azuredisk-1387/pvc-jbhx5"
I0907 20:40:27.701525       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:40:27.701528       1 pv_controller.go:1039] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-jbhx5 (uid: 14901273-395d-4dd3-8400-3a56cd53c390)", boundByController: true
I0907 20:40:27.701534       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
I0907 20:40:27.701543       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-jbhx5" status after binding: phase: Bound, bound to: "pvc-14901273-395d-4dd3-8400-3a56cd53c390", bindCompleted: true, boundByController: true
I0907 20:40:27.701549       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:40:27.701583       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa] started
I0907 20:40:27.706071       1 pv_controller.go:1340] isVolumeReleased[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is released
I0907 20:40:27.706092       1 pv_controller.go:1404] doDeleteVolume [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]
I0907 20:40:27.730228       1 pv_controller.go:1259] deletion of volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:40:27.730264       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: set phase Failed
I0907 20:40:27.730279       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase Failed already set
E0907 20:40:27.730325       1 goroutinemap.go:150] Operation for "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" failed. No retries permitted until 2022-09-07 20:40:29.730292828 +0000 UTC m=+1049.931251578 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-dtt5p), could not be deleted
I0907 20:40:28.157404       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0907 20:40:30.009529       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390) returned with <nil>
I0907 20:40:30.009579       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390) succeeded
I0907 20:40:30.009613       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390 was detached from node:capz-yx2tsa-md-0-dtt5p
I0907 20:40:30.009689       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-14901273-395d-4dd3-8400-3a56cd53c390") on node "capz-yx2tsa-md-0-dtt5p" 
I0907 20:40:30.043336       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa"
... skipping 42 lines ...
I0907 20:40:42.702331       1 pv_controller.go:1012] binding volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" to claim "azuredisk-1387/pvc-jbhx5"
I0907 20:40:42.702338       1 pv_controller.go:861] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: phase Bound already set
I0907 20:40:42.702342       1 pv_controller.go:910] updating PersistentVolume[pvc-14901273-395d-4dd3-8400-3a56cd53c390]: binding to "azuredisk-1387/pvc-jbhx5"
I0907 20:40:42.702351       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:40:42.702357       1 pv_controller.go:922] updating PersistentVolume[pvc-14901273-395d-4dd3-8400-3a56cd53c390]: already bound to "azuredisk-1387/pvc-jbhx5"
I0907 20:40:42.702365       1 pv_controller.go:858] updating PersistentVolume[pvc-14901273-395d-4dd3-8400-3a56cd53c390]: set phase Bound
I0907 20:40:42.702373       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:40:42.702375       1 pv_controller.go:861] updating PersistentVolume[pvc-14901273-395d-4dd3-8400-3a56cd53c390]: phase Bound already set
I0907 20:40:42.702383       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5]: binding to "pvc-14901273-395d-4dd3-8400-3a56cd53c390"
I0907 20:40:42.702395       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:40:42.702414       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:40:42.702403       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-1387/pvc-jbhx5]: already bound to "pvc-14901273-395d-4dd3-8400-3a56cd53c390"
I0907 20:40:42.702422       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
... skipping 3 lines ...
I0907 20:40:42.702458       1 pv_controller.go:1038] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" bound to claim "azuredisk-1387/pvc-jbhx5"
I0907 20:40:42.702468       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa] started
I0907 20:40:42.702476       1 pv_controller.go:1039] volume "pvc-14901273-395d-4dd3-8400-3a56cd53c390" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-jbhx5 (uid: 14901273-395d-4dd3-8400-3a56cd53c390)", boundByController: true
I0907 20:40:42.702490       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-jbhx5" status after binding: phase: Bound, bound to: "pvc-14901273-395d-4dd3-8400-3a56cd53c390", bindCompleted: true, boundByController: true
I0907 20:40:42.707811       1 pv_controller.go:1340] isVolumeReleased[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is released
I0907 20:40:42.707835       1 pv_controller.go:1404] doDeleteVolume [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]
I0907 20:40:42.707873       1 pv_controller.go:1259] deletion of volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) since it's in attaching or detaching state
I0907 20:40:42.707889       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: set phase Failed
I0907 20:40:42.707926       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase Failed already set
E0907 20:40:42.707960       1 goroutinemap.go:150] Operation for "delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]" failed. No retries permitted until 2022-09-07 20:40:46.707938533 +0000 UTC m=+1066.908897183 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) since it's in attaching or detaching state
I0907 20:40:42.843736       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="76.808µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35258" resp=200
I0907 20:40:43.674924       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:40:45.519630       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) returned with <nil>
I0907 20:40:45.519670       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa) succeeded
I0907 20:40:45.519681       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa was detached from node:capz-yx2tsa-md-0-dtt5p
I0907 20:40:45.519706       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa") on node "capz-yx2tsa-md-0-dtt5p" 
... skipping 23 lines ...
I0907 20:40:57.702972       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: all is bound
I0907 20:40:57.702978       1 pv_controller.go:858] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: set phase Bound
I0907 20:40:57.702988       1 pv_controller.go:861] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: phase Bound already set
I0907 20:40:57.702930       1 pv_controller.go:922] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: already bound to "azuredisk-1387/pvc-2d5mc"
I0907 20:40:57.702999       1 pv_controller.go:858] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: set phase Bound
I0907 20:40:57.703000       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 2985
I0907 20:40:57.703036       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:40:57.703054       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:40:57.703066       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:40:57.703071       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
I0907 20:40:57.703084       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:40:57.703008       1 pv_controller.go:861] updating PersistentVolume[pvc-fda7411d-bc13-474e-bf36-0aa72f61f443]: phase Bound already set
I0907 20:40:57.703108       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-1387/pvc-2d5mc]: binding to "pvc-fda7411d-bc13-474e-bf36-0aa72f61f443"
... skipping 28 lines ...
I0907 20:41:02.903493       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa
I0907 20:41:02.903525       1 pv_controller.go:1435] volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" deleted
I0907 20:41:02.903539       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: success
I0907 20:41:02.912538       1 pv_protection_controller.go:205] Got event on PV pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa
I0907 20:41:02.912568       1 pv_protection_controller.go:125] Processing PV pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa
I0907 20:41:02.912992       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" with version 3077
I0907 20:41:02.913023       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: phase: Failed, bound to: "azuredisk-1387/pvc-ltwpj (uid: c9c4020e-a9ed-46b3-b85a-aa41743901fa)", boundByController: true
I0907 20:41:02.913070       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: volume is bound to claim azuredisk-1387/pvc-ltwpj
I0907 20:41:02.913085       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: claim azuredisk-1387/pvc-ltwpj not found
I0907 20:41:02.913091       1 pv_controller.go:1108] reclaimVolume[pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa]: policy is Delete
I0907 20:41:02.913103       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa[01bdb006-a440-41ac-a89b-297828a8cbf8]]
I0907 20:41:02.913125       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa] started
I0907 20:41:02.922524       1 pv_controller.go:1243] Volume "pvc-c9c4020e-a9ed-46b3-b85a-aa41743901fa" is already being deleted
... skipping 628 lines ...
I0907 20:42:01.620227       1 pv_controller.go:1752] scheduleOperation[delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]]
I0907 20:42:01.620244       1 pv_controller.go:1763] operation "delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]" is already running, skipping
I0907 20:42:01.620164       1 pv_controller.go:1231] deleteVolumeOperation [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b] started
I0907 20:42:01.619678       1 pv_protection_controller.go:205] Got event on PV pvc-39ce5944-6007-4bc5-8930-aa384aefb01b
I0907 20:42:01.622548       1 pv_controller.go:1340] isVolumeReleased[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is released
I0907 20:42:01.622566       1 pv_controller.go:1404] doDeleteVolume [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]
I0907 20:42:01.658305       1 pv_controller.go:1259] deletion of volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:42:01.658330       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: set phase Failed
I0907 20:42:01.658339       1 pv_controller.go:858] updating PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: set phase Failed
I0907 20:42:01.662409       1 pv_protection_controller.go:205] Got event on PV pvc-39ce5944-6007-4bc5-8930-aa384aefb01b
I0907 20:42:01.662464       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" with version 3249
I0907 20:42:01.663298       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: phase: Failed, bound to: "azuredisk-4547/pvc-67psk (uid: 39ce5944-6007-4bc5-8930-aa384aefb01b)", boundByController: true
I0907 20:42:01.663555       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is bound to claim azuredisk-4547/pvc-67psk
I0907 20:42:01.663582       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: claim azuredisk-4547/pvc-67psk not found
I0907 20:42:01.663592       1 pv_controller.go:1108] reclaimVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: policy is Delete
I0907 20:42:01.663608       1 pv_controller.go:1752] scheduleOperation[delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]]
I0907 20:42:01.663624       1 pv_controller.go:1763] operation "delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]" is already running, skipping
I0907 20:42:01.663061       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" with version 3249
I0907 20:42:01.663651       1 pv_controller.go:879] volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" entered phase "Failed"
I0907 20:42:01.663667       1 pv_controller.go:901] volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:42:01.663709       1 goroutinemap.go:150] Operation for "delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]" failed. No retries permitted until 2022-09-07 20:42:02.163688246 +0000 UTC m=+1142.364646996 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:42:01.663974       1 event.go:291] "Event occurred" object="pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
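The delete attempt above is rejected because the Azure disk is still attached to a node, and the controller records "No retries permitted until …" with a durationBeforeRetry that doubles on each failure (500ms here, then 1s, 2s, 4s, and 8s later in this log). A minimal sketch of that doubling-backoff retry pattern — plain Go under stated assumptions, not the actual goroutinemap implementation — could look like this:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait after every failure,
// mirroring the durationBeforeRetry progression visible in the log
// (500ms, 1s, 2s, 4s, 8s, ...). maxWait caps the delay.
func retryWithBackoff(op func() error, initial, maxWait time.Duration) error {
	wait := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		fmt.Printf("operation failed: %v; no retries permitted for %v\n", err, wait)
		time.Sleep(wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

func main() {
	attempts := 0
	// Hypothetical delete that keeps failing while the disk is attached,
	// then succeeds once it has been detached.
	deleteDisk := func() error {
		attempts++
		if attempts < 4 {
			return errors.New("disk already attached to node, could not be deleted")
		}
		return nil
	}
	_ = retryWithBackoff(deleteDisk, 500*time.Millisecond, 8*time.Second)
	fmt.Println("disk deleted after", attempts, "attempts")
}
```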
I0907 20:42:02.842271       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="81.207µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58318" resp=200
I0907 20:42:02.849148       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-r4w9v ReadyCondition updated. Updating timestamp.
I0907 20:42:12.586712       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:12.616211       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:12.673078       1 gc_controller.go:161] GC'ing orphaned
... skipping 6 lines ...
I0907 20:42:12.705631       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: all is bound
I0907 20:42:12.705640       1 pv_controller.go:858] updating PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: set phase Bound
I0907 20:42:12.705649       1 pv_controller.go:861] updating PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: phase Bound already set
I0907 20:42:12.705625       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-t28kc" with version 3151
I0907 20:42:12.705662       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" with version 3249
I0907 20:42:12.705667       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-4547/pvc-t28kc]: phase: Bound, bound to: "pvc-66c01598-af05-406d-8c8d-07690cbc6d00", bindCompleted: true, boundByController: true
I0907 20:42:12.705683       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: phase: Failed, bound to: "azuredisk-4547/pvc-67psk (uid: 39ce5944-6007-4bc5-8930-aa384aefb01b)", boundByController: true
I0907 20:42:12.705694       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-t28kc]: volume "pvc-66c01598-af05-406d-8c8d-07690cbc6d00" found: phase: Bound, bound to: "azuredisk-4547/pvc-t28kc (uid: 66c01598-af05-406d-8c8d-07690cbc6d00)", boundByController: true
I0907 20:42:12.705704       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-t28kc]: claim is already correctly bound
I0907 20:42:12.705704       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is bound to claim azuredisk-4547/pvc-67psk
I0907 20:42:12.705713       1 pv_controller.go:1012] binding volume "pvc-66c01598-af05-406d-8c8d-07690cbc6d00" to claim "azuredisk-4547/pvc-t28kc"
I0907 20:42:12.705724       1 pv_controller.go:910] updating PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: binding to "azuredisk-4547/pvc-t28kc"
I0907 20:42:12.705726       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: claim azuredisk-4547/pvc-67psk not found
... skipping 9 lines ...
I0907 20:42:12.705815       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4547/pvc-t28kc] status: phase Bound already set
I0907 20:42:12.705834       1 pv_controller.go:1038] volume "pvc-66c01598-af05-406d-8c8d-07690cbc6d00" bound to claim "azuredisk-4547/pvc-t28kc"
I0907 20:42:12.705850       1 pv_controller.go:1039] volume "pvc-66c01598-af05-406d-8c8d-07690cbc6d00" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-t28kc (uid: 66c01598-af05-406d-8c8d-07690cbc6d00)", boundByController: true
I0907 20:42:12.705864       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-t28kc" status after binding: phase: Bound, bound to: "pvc-66c01598-af05-406d-8c8d-07690cbc6d00", bindCompleted: true, boundByController: true
I0907 20:42:12.709424       1 pv_controller.go:1340] isVolumeReleased[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is released
I0907 20:42:12.709444       1 pv_controller.go:1404] doDeleteVolume [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]
I0907 20:42:12.736009       1 pv_controller.go:1259] deletion of volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:42:12.736131       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: set phase Failed
I0907 20:42:12.736217       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: phase Failed already set
E0907 20:42:12.736326       1 goroutinemap.go:150] Operation for "delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]" failed. No retries permitted until 2022-09-07 20:42:13.736302038 +0000 UTC m=+1153.937260688 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:42:12.843820       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="80.207µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51618" resp=200
I0907 20:42:13.743426       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:42:16.840297       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-66c01598-af05-406d-8c8d-07690cbc6d00) returned with <nil>
I0907 20:42:16.840356       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-66c01598-af05-406d-8c8d-07690cbc6d00) succeeded
I0907 20:42:16.840588       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-66c01598-af05-406d-8c8d-07690cbc6d00 was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:42:16.840736       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-66c01598-af05-406d-8c8d-07690cbc6d00" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-66c01598-af05-406d-8c8d-07690cbc6d00") on node "capz-yx2tsa-md-0-r4w9v" 
... skipping 27 lines ...
I0907 20:42:27.706555       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: volume is bound to claim azuredisk-4547/pvc-t28kc
I0907 20:42:27.706599       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: claim azuredisk-4547/pvc-t28kc found: phase: Bound, bound to: "pvc-66c01598-af05-406d-8c8d-07690cbc6d00", bindCompleted: true, boundByController: true
I0907 20:42:27.706621       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: all is bound
I0907 20:42:27.706630       1 pv_controller.go:858] updating PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: set phase Bound
I0907 20:42:27.706641       1 pv_controller.go:861] updating PersistentVolume[pvc-66c01598-af05-406d-8c8d-07690cbc6d00]: phase Bound already set
I0907 20:42:27.706679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" with version 3249
I0907 20:42:27.706726       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: phase: Failed, bound to: "azuredisk-4547/pvc-67psk (uid: 39ce5944-6007-4bc5-8930-aa384aefb01b)", boundByController: true
I0907 20:42:27.706756       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is bound to claim azuredisk-4547/pvc-67psk
I0907 20:42:27.706778       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: claim azuredisk-4547/pvc-67psk not found
I0907 20:42:27.706813       1 pv_controller.go:1108] reclaimVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: policy is Delete
I0907 20:42:27.706836       1 pv_controller.go:1752] scheduleOperation[delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]]
I0907 20:42:27.706908       1 pv_controller.go:1231] deleteVolumeOperation [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b] started
I0907 20:42:27.711954       1 pv_controller.go:1340] isVolumeReleased[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is released
... skipping 4 lines ...
I0907 20:42:32.931403       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b
I0907 20:42:32.931456       1 pv_controller.go:1435] volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" deleted
I0907 20:42:32.931478       1 pv_controller.go:1283] deleteVolumeOperation [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: success
I0907 20:42:32.942976       1 pv_protection_controller.go:205] Got event on PV pvc-39ce5944-6007-4bc5-8930-aa384aefb01b
I0907 20:42:32.943473       1 pv_protection_controller.go:125] Processing PV pvc-39ce5944-6007-4bc5-8930-aa384aefb01b
I0907 20:42:32.943434       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" with version 3296
I0907 20:42:32.944381       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: phase: Failed, bound to: "azuredisk-4547/pvc-67psk (uid: 39ce5944-6007-4bc5-8930-aa384aefb01b)", boundByController: true
I0907 20:42:32.944685       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: volume is bound to claim azuredisk-4547/pvc-67psk
I0907 20:42:32.944864       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: claim azuredisk-4547/pvc-67psk not found
I0907 20:42:32.945098       1 pv_controller.go:1108] reclaimVolume[pvc-39ce5944-6007-4bc5-8930-aa384aefb01b]: policy is Delete
I0907 20:42:32.945232       1 pv_controller.go:1752] scheduleOperation[delete-pvc-39ce5944-6007-4bc5-8930-aa384aefb01b[9a0bc8ad-5b04-4265-9dae-67ec8ce75749]]
I0907 20:42:32.945463       1 pv_controller.go:1231] deleteVolumeOperation [pvc-39ce5944-6007-4bc5-8930-aa384aefb01b] started
I0907 20:42:32.950108       1 pv_controller.go:1243] Volume "pvc-39ce5944-6007-4bc5-8930-aa384aefb01b" is already being deleted
... skipping 347 lines ...
I0907 20:42:52.234353       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-ghsgd"
I0907 20:42:52.299466       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4547
I0907 20:42:52.315820       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6") from node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:42:52.316306       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c") from node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:42:52.316798       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e") from node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:42:52.322179       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-4qv5q, uid c50b1ecc-80ea-49e9-97e5-7ca112af64d3, event type delete
E0907 20:42:52.338086       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-9bxdb" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0907 20:42:52.357015       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-t6fzl.1712aeb17e0b7a6f, uid aa3f43be-5d16-4205-9327-0934d7424bc2, event type delete
I0907 20:42:52.361120       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-t6fzl.1712aeb3f1148450, uid 0285685d-41aa-48eb-96eb-95ac2afd6d23, event type delete
I0907 20:42:52.362332       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" to node "capz-yx2tsa-md-0-r4w9v".
I0907 20:42:52.362387       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" to node "capz-yx2tsa-md-0-r4w9v".
I0907 20:42:52.362784       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c" to node "capz-yx2tsa-md-0-r4w9v".
I0907 20:42:52.369703       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-t6fzl.1712aeb466ca4be9, uid abe8f5c0-9332-48cb-95d4-a8640055510c, event type delete
... skipping 20 lines ...
I0907 20:42:52.674437       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:42:52.843494       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="77.407µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54266" resp=200
I0907 20:42:52.875741       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0907 20:42:52.937630       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid 1ebfeb08-247a-489e-af9b-89ab6a0e53bc, event type delete
I0907 20:42:52.942128       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (4.440217ms)
I0907 20:42:52.981116       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-hjrxz, uid 1fea22a3-6335-4529-ba41-54016cb43509, event type delete
E0907 20:42:53.001128       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-9nth6" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0907 20:42:53.018441       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0907 20:42:53.018694       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 24c7385d-de4e-410c-abd0-23562fdb9d4a, event type delete
I0907 20:42:53.018753       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (2.9µs)
I0907 20:42:53.080562       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7051, estimate: 0, errors: <nil>
I0907 20:42:53.080793       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (3.001µs)
I0907 20:42:53.102205       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7051" (231.555829ms)
... skipping 299 lines ...
I0907 20:43:26.050376       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
I0907 20:43:26.050478       1 pv_controller.go:1108] reclaimVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: policy is Delete
I0907 20:43:26.050585       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]]
I0907 20:43:26.050678       1 pv_controller.go:1763] operation "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" is already running, skipping
I0907 20:43:26.058177       1 pv_controller.go:1340] isVolumeReleased[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is released
I0907 20:43:26.058204       1 pv_controller.go:1404] doDeleteVolume [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]
I0907 20:43:26.092652       1 pv_controller.go:1259] deletion of volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:26.092684       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:43:26.092693       1 pv_controller.go:858] updating PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:43:26.097952       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:43:26.098277       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:43:26.098393       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:43:26.098485       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:43:26.098592       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
I0907 20:43:26.098657       1 pv_controller.go:1108] reclaimVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: policy is Delete
I0907 20:43:26.098685       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]]
I0907 20:43:26.098699       1 pv_controller.go:1763] operation "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" is already running, skipping
I0907 20:43:26.098105       1 pv_protection_controller.go:205] Got event on PV pvc-f6de6d37-7bc7-499a-8a04-f794058e238e
I0907 20:43:26.099049       1 pv_controller.go:879] volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" entered phase "Failed"
I0907 20:43:26.099086       1 pv_controller.go:901] volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
E0907 20:43:26.099158       1 goroutinemap.go:150] Operation for "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" failed. No retries permitted until 2022-09-07 20:43:26.599129251 +0000 UTC m=+1226.800088001 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:26.099415       1 event.go:291] "Event occurred" object="pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted"
I0907 20:43:27.619368       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:27.708415       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:27.708519       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" with version 3366
I0907 20:43:27.708562       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase: Bound, bound to: "azuredisk-7578/pvc-dl68v (uid: 86da4b73-4093-4b74-be6a-0c78d37622d6)", boundByController: true
I0907 20:43:27.708602       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: volume is bound to claim azuredisk-7578/pvc-dl68v
I0907 20:43:27.708627       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: claim azuredisk-7578/pvc-dl68v found: phase: Bound, bound to: "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6", bindCompleted: true, boundByController: true
I0907 20:43:27.708641       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: all is bound
I0907 20:43:27.708650       1 pv_controller.go:858] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: set phase Bound
I0907 20:43:27.708654       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-dl68v" with version 3368
I0907 20:43:27.708660       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:43:27.708672       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: phase: Bound, bound to: "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6", bindCompleted: true, boundByController: true
I0907 20:43:27.708674       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:43:27.708695       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:43:27.708702       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" found: phase: Bound, bound to: "azuredisk-7578/pvc-dl68v (uid: 86da4b73-4093-4b74-be6a-0c78d37622d6)", boundByController: true
I0907 20:43:27.708712       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: claim is already correctly bound
I0907 20:43:27.708715       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:43:27.708722       1 pv_controller.go:1012] binding volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" to claim "azuredisk-7578/pvc-dl68v"
I0907 20:43:27.708732       1 pv_controller.go:910] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: binding to "azuredisk-7578/pvc-dl68v"
I0907 20:43:27.708735       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
... skipping 32 lines ...
I0907 20:43:27.709090       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: claim azuredisk-7578/pvc-mzwnd found: phase: Bound, bound to: "pvc-1a51a108-c242-4f27-b418-5c745650f31c", bindCompleted: true, boundByController: true
I0907 20:43:27.709110       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: all is bound
I0907 20:43:27.709119       1 pv_controller.go:858] updating PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: set phase Bound
I0907 20:43:27.709130       1 pv_controller.go:861] updating PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: phase Bound already set
I0907 20:43:27.713276       1 pv_controller.go:1340] isVolumeReleased[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is released
I0907 20:43:27.713298       1 pv_controller.go:1404] doDeleteVolume [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]
I0907 20:43:27.741894       1 pv_controller.go:1259] deletion of volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:27.741921       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:43:27.741932       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase Failed already set
E0907 20:43:27.741983       1 goroutinemap.go:150] Operation for "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" failed. No retries permitted until 2022-09-07 20:43:28.741941239 +0000 UTC m=+1228.942899989 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:31.477457       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:43:31.477497       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:43:31.477508       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:43:31.477518       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6 to the node "capz-yx2tsa-md-0-r4w9v" mounted false
I0907 20:43:31.529700       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-md-0-r4w9v"
I0907 20:43:31.529741       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e to the node "capz-yx2tsa-md-0-r4w9v" mounted false
... skipping 40 lines ...
I0907 20:43:42.710101       1 pv_controller.go:858] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: set phase Bound
I0907 20:43:42.709885       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: claim is already correctly bound
I0907 20:43:42.710203       1 pv_controller.go:1012] binding volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" to claim "azuredisk-7578/pvc-dl68v"
I0907 20:43:42.710299       1 pv_controller.go:910] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: binding to "azuredisk-7578/pvc-dl68v"
I0907 20:43:42.710180       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:43:42.710534       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:43:42.710565       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:43:42.710407       1 pv_controller.go:922] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: already bound to "azuredisk-7578/pvc-dl68v"
I0907 20:43:42.710649       1 pv_controller.go:858] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: set phase Bound
I0907 20:43:42.710716       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:43:42.710806       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: binding to "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6"
I0907 20:43:42.710928       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: already bound to "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6"
I0907 20:43:42.710947       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-7578/pvc-dl68v] status: set phase Bound
... skipping 28 lines ...
I0907 20:43:42.714812       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-mzwnd] status: phase Bound already set
I0907 20:43:42.714844       1 pv_controller.go:1038] volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" bound to claim "azuredisk-7578/pvc-mzwnd"
I0907 20:43:42.714881       1 pv_controller.go:1039] volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-mzwnd (uid: 1a51a108-c242-4f27-b418-5c745650f31c)", boundByController: true
I0907 20:43:42.714912       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-mzwnd" status after binding: phase: Bound, bound to: "pvc-1a51a108-c242-4f27-b418-5c745650f31c", bindCompleted: true, boundByController: true
I0907 20:43:42.721952       1 pv_controller.go:1340] isVolumeReleased[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is released
I0907 20:43:42.721974       1 pv_controller.go:1404] doDeleteVolume [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]
I0907 20:43:42.744807       1 pv_controller.go:1259] deletion of volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:42.744836       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:43:42.744846       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase Failed already set
E0907 20:43:42.744876       1 goroutinemap.go:150] Operation for "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" failed. No retries permitted until 2022-09-07 20:43:44.744855427 +0000 UTC m=+1244.945814177 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:42.842905       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="84.308µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58212" resp=200
I0907 20:43:43.809868       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:43:46.958114       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c) returned with <nil>
I0907 20:43:46.958165       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c) succeeded
I0907 20:43:46.958175       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:43:46.958230       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-1a51a108-c242-4f27-b418-5c745650f31c") on node "capz-yx2tsa-md-0-r4w9v" 
... skipping 6 lines ...
I0907 20:43:54.629060       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 2 items received
I0907 20:43:55.597792       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0907 20:43:56.583228       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 8 items received
I0907 20:43:57.619602       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:57.709479       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:57.709571       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:43:57.709617       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:43:57.709654       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:43:57.709683       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
I0907 20:43:57.709696       1 pv_controller.go:1108] reclaimVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: policy is Delete
I0907 20:43:57.709713       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]]
I0907 20:43:57.709737       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" with version 3375
I0907 20:43:57.709765       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: phase: Bound, bound to: "azuredisk-7578/pvc-mzwnd (uid: 1a51a108-c242-4f27-b418-5c745650f31c)", boundByController: true
... skipping 41 lines ...
I0907 20:43:57.712048       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-mzwnd] status: phase Bound already set
I0907 20:43:57.712060       1 pv_controller.go:1038] volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" bound to claim "azuredisk-7578/pvc-mzwnd"
I0907 20:43:57.712077       1 pv_controller.go:1039] volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-mzwnd (uid: 1a51a108-c242-4f27-b418-5c745650f31c)", boundByController: true
I0907 20:43:57.712093       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-mzwnd" status after binding: phase: Bound, bound to: "pvc-1a51a108-c242-4f27-b418-5c745650f31c", bindCompleted: true, boundByController: true
I0907 20:43:57.717378       1 pv_controller.go:1340] isVolumeReleased[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is released
I0907 20:43:57.717401       1 pv_controller.go:1404] doDeleteVolume [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]
I0907 20:43:57.740808       1 pv_controller.go:1259] deletion of volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:57.740835       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:43:57.740846       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase Failed already set
E0907 20:43:57.740876       1 goroutinemap.go:150] Operation for "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" failed. No retries permitted until 2022-09-07 20:44:01.740856541 +0000 UTC m=+1261.941815191 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/virtualMachines/capz-yx2tsa-md-0-r4w9v), could not be deleted
I0907 20:43:58.084876       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 34 items received
I0907 20:44:02.292834       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6) returned with <nil>
I0907 20:44:02.292904       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6) succeeded
I0907 20:44:02.292916       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6 was detached from node:capz-yx2tsa-md-0-r4w9v
I0907 20:44:02.292942       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-86da4b73-4093-4b74-be6a-0c78d37622d6") on node "capz-yx2tsa-md-0-r4w9v" 
I0907 20:44:02.326943       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e"
... skipping 45 lines ...
I0907 20:44:12.711436       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: volume is bound to claim azuredisk-7578/pvc-dl68v
I0907 20:44:12.711519       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: claim azuredisk-7578/pvc-dl68v found: phase: Bound, bound to: "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6", bindCompleted: true, boundByController: true
I0907 20:44:12.711576       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: all is bound
I0907 20:44:12.711610       1 pv_controller.go:858] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: set phase Bound
I0907 20:44:12.711658       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:44:12.711715       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:44:12.711803       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:44:12.711849       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:44:12.711923       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
I0907 20:44:12.711939       1 pv_controller.go:1108] reclaimVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: policy is Delete
I0907 20:44:12.711969       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]]
I0907 20:44:12.712007       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a51a108-c242-4f27-b418-5c745650f31c" with version 3375
I0907 20:44:12.712114       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: phase: Bound, bound to: "azuredisk-7578/pvc-mzwnd (uid: 1a51a108-c242-4f27-b418-5c745650f31c)", boundByController: true
... skipping 2 lines ...
I0907 20:44:12.712532       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: claim azuredisk-7578/pvc-mzwnd found: phase: Bound, bound to: "pvc-1a51a108-c242-4f27-b418-5c745650f31c", bindCompleted: true, boundByController: true
I0907 20:44:12.712554       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: all is bound
I0907 20:44:12.712561       1 pv_controller.go:858] updating PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: set phase Bound
I0907 20:44:12.712570       1 pv_controller.go:861] updating PersistentVolume[pvc-1a51a108-c242-4f27-b418-5c745650f31c]: phase Bound already set
I0907 20:44:12.717349       1 pv_controller.go:1340] isVolumeReleased[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is released
I0907 20:44:12.717368       1 pv_controller.go:1404] doDeleteVolume [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]
I0907 20:44:12.717441       1 pv_controller.go:1259] deletion of volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) since it's in attaching or detaching state
I0907 20:44:12.717458       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: set phase Failed
I0907 20:44:12.717470       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase Failed already set
E0907 20:44:12.717535       1 goroutinemap.go:150] Operation for "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" failed. No retries permitted until 2022-09-07 20:44:20.717514019 +0000 UTC m=+1280.918472969 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) since it's in attaching or detaching state
I0907 20:44:12.843335       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="79.507µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58668" resp=200
I0907 20:44:13.829754       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:44:15.381546       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-yx2tsa-control-plane-jxjcc"
I0907 20:44:17.706675       1 azure_controller_standard.go:184] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-r4w9v) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) returned with <nil>
I0907 20:44:17.706717       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e) succeeded
I0907 20:44:17.706728       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e was detached from node:capz-yx2tsa-md-0-r4w9v
... skipping 13 lines ...
I0907 20:44:27.710649       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" found: phase: Bound, bound to: "azuredisk-7578/pvc-dl68v (uid: 86da4b73-4093-4b74-be6a-0c78d37622d6)", boundByController: true
I0907 20:44:27.710666       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:44:27.710670       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: claim is already correctly bound
I0907 20:44:27.710687       1 pv_controller.go:1012] binding volume "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6" to claim "azuredisk-7578/pvc-dl68v"
I0907 20:44:27.710690       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3483
I0907 20:44:27.710704       1 pv_controller.go:910] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: binding to "azuredisk-7578/pvc-dl68v"
I0907 20:44:27.710725       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:44:27.710734       1 pv_controller.go:922] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: already bound to "azuredisk-7578/pvc-dl68v"
I0907 20:44:27.710748       1 pv_controller.go:858] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: set phase Bound
I0907 20:44:27.710759       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:44:27.710765       1 pv_controller.go:861] updating PersistentVolume[pvc-86da4b73-4093-4b74-be6a-0c78d37622d6]: phase Bound already set
I0907 20:44:27.710780       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-7578/pvc-dl68v]: binding to "pvc-86da4b73-4093-4b74-be6a-0c78d37622d6"
I0907 20:44:27.710795       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
... skipping 38 lines ...
I0907 20:44:32.884575       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e
I0907 20:44:32.884613       1 pv_controller.go:1435] volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" deleted
I0907 20:44:32.884626       1 pv_controller.go:1283] deleteVolumeOperation [pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: success
I0907 20:44:32.893069       1 pv_protection_controller.go:205] Got event on PV pvc-f6de6d37-7bc7-499a-8a04-f794058e238e
I0907 20:44:32.893118       1 pv_protection_controller.go:125] Processing PV pvc-f6de6d37-7bc7-499a-8a04-f794058e238e
I0907 20:44:32.893357       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" with version 3583
I0907 20:44:32.893672       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: phase: Failed, bound to: "azuredisk-7578/pvc-4jc76 (uid: f6de6d37-7bc7-499a-8a04-f794058e238e)", boundByController: true
I0907 20:44:32.893748       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: volume is bound to claim azuredisk-7578/pvc-4jc76
I0907 20:44:32.893831       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: claim azuredisk-7578/pvc-4jc76 not found
I0907 20:44:32.893944       1 pv_controller.go:1108] reclaimVolume[pvc-f6de6d37-7bc7-499a-8a04-f794058e238e]: policy is Delete
I0907 20:44:32.893972       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]]
I0907 20:44:32.894037       1 pv_controller.go:1763] operation "delete-pvc-f6de6d37-7bc7-499a-8a04-f794058e238e[56e6ccbb-9f43-4e7e-8f45-aa4871006bec]" is already running, skipping
I0907 20:44:32.905029       1 pv_controller_base.go:235] volume "pvc-f6de6d37-7bc7-499a-8a04-f794058e238e" deleted
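Taken together, the pvc-f6de6d37… lines show why the deletion eventually succeeds: every delete attempt is rejected while the managed disk is attached (or mid-detach) on capz-yx2tsa-md-0-r4w9v, and only after DetachVolume.Detach completes does doDeleteVolume remove the disk and the PV object. A minimal sketch of that ordering guard — using a hypothetical attachment flag rather than the real cloud-provider call — might be:

```go
package main

import (
	"errors"
	"fmt"
)

// diskState is a hypothetical view of a managed disk's attachment status.
type diskState struct {
	name     string
	attached bool // true while the disk is attached or detaching
}

var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// deleteManagedDisk refuses to delete a disk that is still attached;
// the caller is expected to retry (with backoff) after the detach completes.
func deleteManagedDisk(d *diskState) error {
	if d.attached {
		return errStillAttached
	}
	fmt.Printf("azureDisk - deleted a managed disk: %s\n", d.name)
	return nil
}

func main() {
	d := &diskState{name: "capz-example-dynamic-pvc", attached: true}

	if err := deleteManagedDisk(d); err != nil {
		fmt.Println("first attempt:", err) // mirrors the VolumeFailedDelete events
	}

	d.attached = false // detach finishes, as in the detach-succeeded log lines

	if err := deleteManagedDisk(d); err == nil {
		fmt.Println("second attempt: deletion succeeded")
	}
}
```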
... skipping 300 lines ...
I0907 20:45:02.585442       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7578, estimate: 0, errors: <nil>
I0907 20:45:02.596447       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7578" (350.640602ms)
I0907 20:45:02.829667       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1968
I0907 20:45:02.847227       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.007µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35738" resp=200
I0907 20:45:02.868442       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1968, name default-token-pw26z, uid fae4f4d0-07f9-429e-a772-587bdda94ba9, event type delete
I0907 20:45:02.876949       1 node_lifecycle_controller.go:1047] Node capz-yx2tsa-md-0-dtt5p ReadyCondition updated. Updating timestamp.
E0907 20:45:02.882170       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1968/default: secrets "default-token-rrxmv" is forbidden: unable to create new content in namespace azuredisk-1968 because it is being terminated
I0907 20:45:02.914691       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1968, name kube-root-ca.crt, uid 15be6463-85e8-4301-a8bd-e9d6f73d45bd, event type delete
I0907 20:45:02.917537       1 publisher.go:186] Finished syncing namespace "azuredisk-1968" (2.795864ms)
I0907 20:45:02.969717       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1968/default), service account deleted, removing tokens
I0907 20:45:02.970243       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1968, name default, uid 6ff2a9c1-fc9e-4d32-adc7-bbafef7315bb, event type delete
I0907 20:45:02.970271       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1968" (5.001µs)
I0907 20:45:02.983322       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1968" (2.401µs)
... skipping 22 lines ...
I0907 20:45:03.563139       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4657" (172.496913ms)
I0907 20:45:03.575425       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 38 items received
I0907 20:45:03.969824       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1359
I0907 20:45:04.049346       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1359, name default-token-rgbqr, uid 2c986816-da84-471b-a84f-226804f55377, event type delete
I0907 20:45:04.058470       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1359, name kube-root-ca.crt, uid d2eb8ca5-0a90-4503-8486-e0e1d1e099f5, event type delete
I0907 20:45:04.060954       1 publisher.go:186] Finished syncing namespace "azuredisk-1359" (2.423729ms)
E0907 20:45:04.066670       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1359/default: secrets "default-token-f5qc5" is forbidden: unable to create new content in namespace azuredisk-1359 because it is being terminated
I0907 20:45:04.071978       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1359/default), service account deleted, removing tokens
I0907 20:45:04.072042       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1359, name default, uid 3b6b90a4-79ad-4ddb-9bc0-5f125c75000a, event type delete
I0907 20:45:04.072080       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (1.6µs)
I0907 20:45:04.135674       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1359, estimate: 0, errors: <nil>
I0907 20:45:04.136389       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (3.401µs)
I0907 20:45:04.145885       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1359" (179.328159ms)
I0907 20:45:04.546356       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-565
I0907 20:45:04.567902       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-565, name default-token-64mc6, uid 661a4d11-d47c-4737-8733-b21387cb89d0, event type delete
E0907 20:45:04.624425       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-565/default: secrets "default-token-wq422" is forbidden: unable to create new content in namespace azuredisk-565 because it is being terminated
I0907 20:45:04.689796       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-565, name kube-root-ca.crt, uid 622a7a39-48d5-466b-9f4a-fcf43f117a7e, event type delete
I0907 20:45:04.693121       1 publisher.go:186] Finished syncing namespace "azuredisk-565" (3.270909ms)
I0907 20:45:04.710643       1 tokens_controller.go:252] syncServiceAccount(azuredisk-565/default), service account deleted, removing tokens
I0907 20:45:04.710787       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-565, name default, uid b99bc430-eb86-42c8-8445-7224568fc2aa, event type delete
I0907 20:45:04.710887       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (2.7µs)
I0907 20:45:04.767891       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (2.601µs)
... skipping 440 lines ...
I0907 20:46:33.335749       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d74d5a7e-975a-49a9-84f5-a506e9d0dadf" lun 0 to node "capz-yx2tsa-md-0-dtt5p".
I0907 20:46:33.335800       1 azure_controller_standard.go:93] azureDisk - update(capz-yx2tsa): vm(capz-yx2tsa-md-0-dtt5p) - attach disk(capz-yx2tsa-dynamic-pvc-d74d5a7e-975a-49a9-84f5-a506e9d0dadf, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-yx2tsa/providers/Microsoft.Compute/disks/capz-yx2tsa-dynamic-pvc-d74d5a7e-975a-49a9-84f5-a506e9d0dadf) with DiskEncryptionSetID()
I0907 20:46:34.446802       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8666
I0907 20:46:34.472712       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8666, name kube-root-ca.crt, uid 1a23a3a2-7b14-47a6-a38a-1c8324051510, event type delete
I0907 20:46:34.474646       1 publisher.go:186] Finished syncing namespace "azuredisk-8666" (1.876985ms)
I0907 20:46:34.520672       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8666, name default-token-vq5f4, uid 31fb951f-2f24-4f77-9e2d-a96c853df985, event type delete
E0907 20:46:34.550571       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8666/default: secrets "default-token-g7bkr" is forbidden: unable to create new content in namespace azuredisk-8666 because it is being terminated
I0907 20:46:34.559427       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee329d21ceb, uid dcfed83c-aa0e-4c66-9118-ed0268f4fbff, event type delete
I0907 20:46:34.564184       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee59add413f, uid f92848cf-4870-47c2-965a-13335538be4c, event type delete
I0907 20:46:34.569190       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee66341c963, uid 855f44e6-ff96-4b57-b0e8-e767123de58e, event type delete
I0907 20:46:34.578051       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee66703327c, uid 70bc7f00-a547-489f-8bd5-9ab36fae968b, event type delete
I0907 20:46:34.592990       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee66e247a1f, uid b5430cb2-6690-4cfc-b119-8920f00a13ad, event type delete
I0907 20:46:34.603591       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-n2td4.1712aee71b2036e8, uid 53dad2c3-e6ac-4d1b-b6c6-4f537eb647f9, event type delete
... skipping 373 lines ...
I0907 20:47:52.843203       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.606µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46196" resp=200
2022/09/07 20:47:53 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1248.792 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 38 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-8vqmd, container manager
STEP: Dumping workload cluster default/capz-yx2tsa logs
Sep  7 20:49:22.364: INFO: Collecting logs for Linux node capz-yx2tsa-control-plane-jxjcc in cluster capz-yx2tsa in namespace default

Sep  7 20:50:22.366: INFO: Collecting boot logs for AzureMachine capz-yx2tsa-control-plane-jxjcc

Failed to get logs for machine capz-yx2tsa-control-plane-p6x5s, cluster default/capz-yx2tsa: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 20:50:23.186: INFO: Collecting logs for Linux node capz-yx2tsa-md-0-r4w9v in cluster capz-yx2tsa in namespace default

Sep  7 20:51:23.187: INFO: Collecting boot logs for AzureMachine capz-yx2tsa-md-0-r4w9v

Failed to get logs for machine capz-yx2tsa-md-0-5975687cc4-fxl8q, cluster default/capz-yx2tsa: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 20:51:23.499: INFO: Collecting logs for Linux node capz-yx2tsa-md-0-dtt5p in cluster capz-yx2tsa in namespace default

Sep  7 20:52:23.500: INFO: Collecting boot logs for AzureMachine capz-yx2tsa-md-0-dtt5p

Failed to get logs for machine capz-yx2tsa-md-0-5975687cc4-kq2j2, cluster default/capz-yx2tsa: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-yx2tsa kube-system pod logs
STEP: Fetching kube-system pod logs took 492.13885ms
STEP: Dumping workload cluster default/capz-yx2tsa Azure activity log
STEP: Creating log watcher for controller kube-system/etcd-capz-yx2tsa-control-plane-jxjcc, container etcd
STEP: Collecting events for Pod kube-system/kube-proxy-2s4jq
STEP: Collecting events for Pod kube-system/etcd-capz-yx2tsa-control-plane-jxjcc
STEP: Collecting events for Pod kube-system/metrics-server-8c95fb79b-slm5t
STEP: failed to find events of Pod "etcd-capz-yx2tsa-control-plane-jxjcc"
STEP: Collecting events for Pod kube-system/calico-node-dzpmk
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-zdc5x, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-zdc5x
STEP: Creating log watcher for controller kube-system/calico-node-bt2d9, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-bt2d9
STEP: Creating log watcher for controller kube-system/calico-node-dzpmk, container calico-node
... skipping 8 lines ...
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-yx2tsa-control-plane-jxjcc, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-yx2tsa-control-plane-jxjcc
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-yx2tsa-control-plane-jxjcc
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-yx2tsa-control-plane-jxjcc
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-yx2tsa-control-plane-jxjcc, container kube-scheduler
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-vmsfz, container coredns
STEP: failed to find events of Pod "kube-apiserver-capz-yx2tsa-control-plane-jxjcc"
STEP: Creating log watcher for controller kube-system/kube-proxy-2s4jq, container kube-proxy
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-5pr9k
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-vmsfz
STEP: Creating log watcher for controller kube-system/metrics-server-8c95fb79b-slm5t, container metrics-server
STEP: failed to find events of Pod "kube-controller-manager-capz-yx2tsa-control-plane-jxjcc"
STEP: failed to find events of Pod "kube-scheduler-capz-yx2tsa-control-plane-jxjcc"
STEP: Fetching activity logs took 3.342195453s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-yx2tsa" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...