Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-02 20:09
Elapsed: 55m22s
Revision:
Uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 628 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 130 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-clawep-kubeconfig; do sleep 1; done"
capz-clawep-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-clawep-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-clawep-control-plane-65svl   NotReady   control-plane,master   2s    v1.22.14-rc.0.3+b89409c45e0dcb
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-clawep-control-plane-65svl condition met
node/capz-clawep-mp-0000000 condition met
... skipping 100 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  2 20:31:58.970: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.298 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  2 20:32:00.849: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-kn4z2" in namespace "azuredisk-1353" to be "Succeeded or Failed"
Sep  2 20:32:00.880: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 31.575133ms
Sep  2 20:32:02.913: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063974327s
Sep  2 20:32:04.946: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09738773s
Sep  2 20:32:06.979: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130287113s
Sep  2 20:32:09.011: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.16244735s
Sep  2 20:32:11.044: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.195758816s
... skipping 6 lines ...
Sep  2 20:32:25.275: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.426800328s
Sep  2 20:32:27.308: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 26.459083349s
Sep  2 20:32:29.343: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.493896699s
Sep  2 20:32:31.378: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Running", Reason="", readiness=false. Elapsed: 30.529135576s
Sep  2 20:32:33.413: INFO: Pod "azuredisk-volume-tester-kn4z2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.564138758s
STEP: Saw pod success
Sep  2 20:32:33.413: INFO: Pod "azuredisk-volume-tester-kn4z2" satisfied condition "Succeeded or Failed"
Sep  2 20:32:33.413: INFO: deleting Pod "azuredisk-1353"/"azuredisk-volume-tester-kn4z2"
Sep  2 20:32:33.459: INFO: Pod azuredisk-volume-tester-kn4z2 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-kn4z2 in namespace azuredisk-1353
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:32:33.581: INFO: deleting PVC "azuredisk-1353"/"pvc-qhzsv"
Sep  2 20:32:33.581: INFO: Deleting PersistentVolumeClaim "pvc-qhzsv"
STEP: waiting for claim's PV "pvc-0aa46493-c66f-4355-8724-5869529b8f51" to be deleted
Sep  2 20:32:33.614: INFO: Waiting up to 10m0s for PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 to get deleted
Sep  2 20:32:33.645: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Released (31.016488ms)
Sep  2 20:32:38.681: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (5.067366214s)
Sep  2 20:32:43.717: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (10.10325666s)
Sep  2 20:32:48.749: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (15.135269903s)
Sep  2 20:32:53.781: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (20.167369353s)
Sep  2 20:32:58.814: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (25.199686749s)
Sep  2 20:33:03.847: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (30.233489829s)
Sep  2 20:33:08.885: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 found and phase=Failed (35.270732858s)
Sep  2 20:33:13.922: INFO: PersistentVolume pvc-0aa46493-c66f-4355-8724-5869529b8f51 was removed
Sep  2 20:33:13.922: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  2 20:33:13.954: INFO: Claim "azuredisk-1353" in namespace "pvc-qhzsv" doesn't exist in the system
Sep  2 20:33:13.954: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-pljvm
Sep  2 20:33:13.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  2 20:33:38.118: INFO: deleting Pod "azuredisk-1563"/"azuredisk-volume-tester-8pp78"
Sep  2 20:33:38.152: INFO: Error getting logs for pod azuredisk-volume-tester-8pp78: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-8pp78)
STEP: Deleting pod azuredisk-volume-tester-8pp78 in namespace azuredisk-1563
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:33:38.249: INFO: deleting PVC "azuredisk-1563"/"pvc-25xpm"
Sep  2 20:33:38.249: INFO: Deleting PersistentVolumeClaim "pvc-25xpm"
STEP: waiting for claim's PV "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" to be deleted
... skipping 16 lines ...
Sep  2 20:34:53.854: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m15.572390444s)
Sep  2 20:34:58.890: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m20.607501932s)
Sep  2 20:35:03.922: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m25.640279667s)
Sep  2 20:35:08.959: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m30.677316762s)
Sep  2 20:35:13.997: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m35.71448146s)
Sep  2 20:35:19.030: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Bound (1m40.748297732s)
Sep  2 20:35:24.067: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (1m45.785335572s)
Sep  2 20:35:29.101: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (1m50.819436551s)
Sep  2 20:35:34.136: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (1m55.854178442s)
Sep  2 20:35:39.169: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (2m0.886713883s)
Sep  2 20:35:44.202: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (2m5.920267417s)
Sep  2 20:35:49.237: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (2m10.95547085s)
Sep  2 20:35:54.270: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 found and phase=Failed (2m15.987921412s)
Sep  2 20:35:59.306: INFO: PersistentVolume pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 was removed
Sep  2 20:35:59.310: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Sep  2 20:35:59.343: INFO: Claim "azuredisk-1563" in namespace "pvc-25xpm" doesn't exist in the system
Sep  2 20:35:59.343: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-d96gm
Sep  2 20:35:59.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  2 20:36:00.153: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nt6fw" in namespace "azuredisk-7463" to be "Succeeded or Failed"
Sep  2 20:36:00.185: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 32.316951ms
Sep  2 20:36:02.218: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065182387s
Sep  2 20:36:04.252: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098750962s
Sep  2 20:36:06.286: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133407972s
Sep  2 20:36:08.321: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167720451s
Sep  2 20:36:10.355: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202327896s
Sep  2 20:36:12.390: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.236835939s
Sep  2 20:36:14.424: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.271367139s
Sep  2 20:36:16.461: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.307793305s
Sep  2 20:36:18.496: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.342761008s
Sep  2 20:36:20.531: INFO: Pod "azuredisk-volume-tester-nt6fw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.378124382s
STEP: Saw pod success
Sep  2 20:36:20.531: INFO: Pod "azuredisk-volume-tester-nt6fw" satisfied condition "Succeeded or Failed"
Sep  2 20:36:20.531: INFO: deleting Pod "azuredisk-7463"/"azuredisk-volume-tester-nt6fw"
Sep  2 20:36:20.576: INFO: Pod azuredisk-volume-tester-nt6fw has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-nt6fw in namespace azuredisk-7463
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:36:20.692: INFO: deleting PVC "azuredisk-7463"/"pvc-6tzst"
Sep  2 20:36:20.692: INFO: Deleting PersistentVolumeClaim "pvc-6tzst"
STEP: waiting for claim's PV "pvc-4c087965-1007-4357-972d-d588a3c26878" to be deleted
Sep  2 20:36:20.732: INFO: Waiting up to 10m0s for PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 to get deleted
Sep  2 20:36:20.763: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Released (31.390328ms)
Sep  2 20:36:25.800: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (5.068223588s)
Sep  2 20:36:30.835: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (10.103550292s)
Sep  2 20:36:35.869: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (15.13731848s)
Sep  2 20:36:40.905: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (20.172980805s)
Sep  2 20:36:45.939: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (25.206879772s)
Sep  2 20:36:50.972: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (30.240299183s)
Sep  2 20:36:56.006: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 found and phase=Failed (35.27428839s)
Sep  2 20:37:01.042: INFO: PersistentVolume pvc-4c087965-1007-4357-972d-d588a3c26878 was removed
Sep  2 20:37:01.042: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7463 to be removed
Sep  2 20:37:01.074: INFO: Claim "azuredisk-7463" in namespace "pvc-6tzst" doesn't exist in the system
Sep  2 20:37:01.074: INFO: deleting StorageClass azuredisk-7463-kubernetes.io-azure-disk-dynamic-sc-kz84l
Sep  2 20:37:01.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-7463" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  2 20:37:01.859: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-d2stg" in namespace "azuredisk-9241" to be "Error status code"
Sep  2 20:37:01.891: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 31.907798ms
Sep  2 20:37:03.924: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065645956s
Sep  2 20:37:05.958: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099302795s
Sep  2 20:37:07.991: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13226557s
Sep  2 20:37:10.026: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167158913s
Sep  2 20:37:12.059: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.200046776s
Sep  2 20:37:14.092: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 12.232804061s
Sep  2 20:37:16.125: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 14.266686248s
Sep  2 20:37:18.158: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 16.299532494s
Sep  2 20:37:20.194: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 18.334797827s
Sep  2 20:37:22.228: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 20.369047073s
Sep  2 20:37:24.263: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Pending", Reason="", readiness=false. Elapsed: 22.40413303s
Sep  2 20:37:26.298: INFO: Pod "azuredisk-volume-tester-d2stg": Phase="Failed", Reason="", readiness=false. Elapsed: 24.43958179s
STEP: Saw pod failure
Sep  2 20:37:26.299: INFO: Pod "azuredisk-volume-tester-d2stg" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  2 20:37:26.334: INFO: deleting Pod "azuredisk-9241"/"azuredisk-volume-tester-d2stg"
Sep  2 20:37:26.369: INFO: Pod azuredisk-volume-tester-d2stg has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-d2stg in namespace azuredisk-9241
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:37:26.475: INFO: deleting PVC "azuredisk-9241"/"pvc-7hvdh"
Sep  2 20:37:26.475: INFO: Deleting PersistentVolumeClaim "pvc-7hvdh"
STEP: waiting for claim's PV "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" to be deleted
Sep  2 20:37:26.511: INFO: Waiting up to 10m0s for PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b to get deleted
Sep  2 20:37:26.543: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Released (32.14797ms)
Sep  2 20:37:31.579: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (5.068247854s)
Sep  2 20:37:36.616: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (10.105305485s)
Sep  2 20:37:41.651: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (15.139586567s)
Sep  2 20:37:46.686: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (20.175220046s)
Sep  2 20:37:51.723: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (25.21189662s)
Sep  2 20:37:56.760: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b found and phase=Failed (30.24937469s)
Sep  2 20:38:01.797: INFO: PersistentVolume pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b was removed
Sep  2 20:38:01.797: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9241 to be removed
Sep  2 20:38:01.829: INFO: Claim "azuredisk-9241" in namespace "pvc-7hvdh" doesn't exist in the system
Sep  2 20:38:01.829: INFO: deleting StorageClass azuredisk-9241-kubernetes.io-azure-disk-dynamic-sc-5v4m2
Sep  2 20:38:01.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9241" for this suite.
... skipping 53 lines ...
Sep  2 20:39:06.302: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (5.066484582s)
Sep  2 20:39:11.337: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (10.101529232s)
Sep  2 20:39:16.370: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (15.134880905s)
Sep  2 20:39:21.407: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (20.171957383s)
Sep  2 20:39:26.443: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (25.207287404s)
Sep  2 20:39:31.480: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Bound (30.244182077s)
Sep  2 20:39:36.512: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (35.276389446s)
Sep  2 20:39:41.547: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (40.311980406s)
Sep  2 20:39:46.584: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (45.349032757s)
Sep  2 20:39:51.621: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (50.385844185s)
Sep  2 20:39:56.658: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (55.422233537s)
Sep  2 20:40:01.691: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (1m0.455180626s)
Sep  2 20:40:06.728: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (1m5.492309138s)
Sep  2 20:40:11.762: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 found and phase=Failed (1m10.527100906s)
Sep  2 20:40:16.799: INFO: PersistentVolume pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 was removed
Sep  2 20:40:16.799: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  2 20:40:16.832: INFO: Claim "azuredisk-9336" in namespace "pvc-xh2jg" doesn't exist in the system
Sep  2 20:40:16.832: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-pbtvl
Sep  2 20:40:16.865: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-bgt7g"
Sep  2 20:40:16.911: INFO: Pod azuredisk-volume-tester-bgt7g has the following logs: 
... skipping 8 lines ...
Sep  2 20:40:22.115: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (5.068098639s)
Sep  2 20:40:27.148: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (10.101655799s)
Sep  2 20:40:32.183: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (15.13651633s)
Sep  2 20:40:37.219: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (20.172099808s)
Sep  2 20:40:42.254: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (25.207304692s)
Sep  2 20:40:47.290: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Bound (30.242857654s)
Sep  2 20:40:52.323: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Failed (35.27638937s)
Sep  2 20:40:57.360: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Failed (40.313248475s)
Sep  2 20:41:02.393: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Failed (45.346752558s)
Sep  2 20:41:07.426: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Failed (50.379585651s)
Sep  2 20:41:12.464: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc found and phase=Failed (55.417268951s)
Sep  2 20:41:17.498: INFO: PersistentVolume pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc was removed
Sep  2 20:41:17.498: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  2 20:41:17.530: INFO: Claim "azuredisk-9336" in namespace "pvc-2x4qj" doesn't exist in the system
Sep  2 20:41:17.530: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-z27dv
Sep  2 20:41:17.563: INFO: deleting Pod "azuredisk-9336"/"azuredisk-volume-tester-snvs7"
Sep  2 20:41:17.605: INFO: Pod azuredisk-volume-tester-snvs7 has the following logs: 
... skipping 8 lines ...
Sep  2 20:41:22.809: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (5.066282011s)
Sep  2 20:41:27.843: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (10.099879757s)
Sep  2 20:41:32.877: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (15.134224848s)
Sep  2 20:41:37.914: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (20.170874977s)
Sep  2 20:41:42.946: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (25.203135439s)
Sep  2 20:41:47.982: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Bound (30.239339873s)
Sep  2 20:41:53.017: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Failed (35.2737343s)
Sep  2 20:41:58.049: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Failed (40.306318392s)
Sep  2 20:42:03.085: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Failed (45.341859559s)
Sep  2 20:42:08.117: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e found and phase=Failed (50.374396785s)
Sep  2 20:42:13.150: INFO: PersistentVolume pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e was removed
Sep  2 20:42:13.150: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9336 to be removed
Sep  2 20:42:13.181: INFO: Claim "azuredisk-9336" in namespace "pvc-tdqf2" doesn't exist in the system
Sep  2 20:42:13.181: INFO: deleting StorageClass azuredisk-9336-kubernetes.io-azure-disk-dynamic-sc-sn4gc
Sep  2 20:42:13.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9336" for this suite.
... skipping 59 lines ...
Sep  2 20:43:48.883: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (5.063551281s)
Sep  2 20:43:53.917: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (10.09747115s)
Sep  2 20:43:58.950: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (15.129897019s)
Sep  2 20:44:03.986: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (20.166116633s)
Sep  2 20:44:09.020: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (25.200021689s)
Sep  2 20:44:14.052: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Bound (30.231980722s)
Sep  2 20:44:19.088: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Failed (35.267711827s)
Sep  2 20:44:24.120: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Failed (40.299746009s)
Sep  2 20:44:29.153: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Failed (45.332794025s)
Sep  2 20:44:34.188: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Failed (50.367691253s)
Sep  2 20:44:39.225: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 found and phase=Failed (55.404603892s)
Sep  2 20:44:44.260: INFO: PersistentVolume pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 was removed
Sep  2 20:44:44.260: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2205 to be removed
Sep  2 20:44:44.292: INFO: Claim "azuredisk-2205" in namespace "pvc-tpzm8" doesn't exist in the system
Sep  2 20:44:44.292: INFO: deleting StorageClass azuredisk-2205-kubernetes.io-azure-disk-dynamic-sc-9b7l9
Sep  2 20:44:44.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2205" for this suite.
... skipping 160 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  2 20:44:56.827: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rz7ph" in namespace "azuredisk-1387" to be "Succeeded or Failed"
Sep  2 20:44:56.860: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 32.748615ms
Sep  2 20:44:58.892: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064769156s
Sep  2 20:45:00.927: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099547152s
Sep  2 20:45:02.962: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134767508s
Sep  2 20:45:04.996: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168849133s
Sep  2 20:45:07.030: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202344212s
... skipping 10 lines ...
Sep  2 20:45:29.418: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 32.590609982s
Sep  2 20:45:31.452: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 34.625222431s
Sep  2 20:45:33.487: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 36.660182207s
Sep  2 20:45:35.522: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Pending", Reason="", readiness=false. Elapsed: 38.694557312s
Sep  2 20:45:37.556: INFO: Pod "azuredisk-volume-tester-rz7ph": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.729156021s
STEP: Saw pod success
Sep  2 20:45:37.556: INFO: Pod "azuredisk-volume-tester-rz7ph" satisfied condition "Succeeded or Failed"
Sep  2 20:45:37.556: INFO: deleting Pod "azuredisk-1387"/"azuredisk-volume-tester-rz7ph"
Sep  2 20:45:37.603: INFO: Pod azuredisk-volume-tester-rz7ph has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-rz7ph in namespace azuredisk-1387
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:45:37.708: INFO: deleting PVC "azuredisk-1387"/"pvc-tk6zc"
Sep  2 20:45:37.708: INFO: Deleting PersistentVolumeClaim "pvc-tk6zc"
STEP: waiting for claim's PV "pvc-9571446a-3972-4090-a129-b8142098251d" to be deleted
Sep  2 20:45:37.741: INFO: Waiting up to 10m0s for PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d to get deleted
Sep  2 20:45:37.772: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Released (30.981331ms)
Sep  2 20:45:42.804: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (5.063262083s)
Sep  2 20:45:47.837: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (10.09636628s)
Sep  2 20:45:52.873: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (15.132675825s)
Sep  2 20:45:57.910: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (20.169017311s)
Sep  2 20:46:02.945: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (25.204306386s)
Sep  2 20:46:07.980: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (30.239025593s)
Sep  2 20:46:13.012: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (35.271090375s)
Sep  2 20:46:18.049: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (40.30811285s)
Sep  2 20:46:23.085: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (45.344123143s)
Sep  2 20:46:28.117: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (50.37647849s)
Sep  2 20:46:33.150: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (55.409016401s)
Sep  2 20:46:38.183: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d found and phase=Failed (1m0.441743178s)
Sep  2 20:46:43.219: INFO: PersistentVolume pvc-9571446a-3972-4090-a129-b8142098251d was removed
Sep  2 20:46:43.219: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1387 to be removed
Sep  2 20:46:43.251: INFO: Claim "azuredisk-1387" in namespace "pvc-tk6zc" doesn't exist in the system
Sep  2 20:46:43.252: INFO: deleting StorageClass azuredisk-1387-kubernetes.io-azure-disk-dynamic-sc-wl42p
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  2 20:47:04.543: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9bmx6" in namespace "azuredisk-4547" to be "Succeeded or Failed"
Sep  2 20:47:04.575: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 31.195619ms
Sep  2 20:47:06.607: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063272848s
Sep  2 20:47:08.639: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095425847s
Sep  2 20:47:10.672: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128235679s
Sep  2 20:47:12.704: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160747203s
Sep  2 20:47:14.738: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.194278164s
... skipping 5 lines ...
Sep  2 20:47:26.939: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 22.395298137s
Sep  2 20:47:28.971: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 24.427478344s
Sep  2 20:47:31.004: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 26.460835841s
Sep  2 20:47:33.040: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.496473024s
Sep  2 20:47:35.074: INFO: Pod "azuredisk-volume-tester-9bmx6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.53076148s
STEP: Saw pod success
Sep  2 20:47:35.074: INFO: Pod "azuredisk-volume-tester-9bmx6" satisfied condition "Succeeded or Failed"
Sep  2 20:47:35.074: INFO: deleting Pod "azuredisk-4547"/"azuredisk-volume-tester-9bmx6"
Sep  2 20:47:35.121: INFO: Pod azuredisk-volume-tester-9bmx6 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.056528 seconds, 1.7GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep  2 20:47:35.225: INFO: deleting PVC "azuredisk-4547"/"pvc-8n4gm"
Sep  2 20:47:35.225: INFO: Deleting PersistentVolumeClaim "pvc-8n4gm"
STEP: waiting for claim's PV "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" to be deleted
Sep  2 20:47:35.258: INFO: Waiting up to 10m0s for PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba to get deleted
Sep  2 20:47:35.290: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Released (32.36159ms)
Sep  2 20:47:40.327: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (5.069409972s)
Sep  2 20:47:45.363: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (10.10465705s)
Sep  2 20:47:50.399: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (15.141320326s)
Sep  2 20:47:55.436: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (20.177704872s)
Sep  2 20:48:00.471: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (25.212829708s)
Sep  2 20:48:05.506: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (30.247821415s)
Sep  2 20:48:10.543: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (35.284559398s)
Sep  2 20:48:15.578: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (40.319812742s)
Sep  2 20:48:20.610: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (45.352259427s)
Sep  2 20:48:25.642: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba found and phase=Failed (50.384070505s)
Sep  2 20:48:30.674: INFO: PersistentVolume pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba was removed
Sep  2 20:48:30.674: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-4547 to be removed
Sep  2 20:48:30.706: INFO: Claim "azuredisk-4547" in namespace "pvc-8n4gm" doesn't exist in the system
Sep  2 20:48:30.706: INFO: deleting StorageClass azuredisk-4547-kubernetes.io-azure-disk-dynamic-sc-5hzkc
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  2 20:48:42.918: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-znxbc" in namespace "azuredisk-7578" to be "Succeeded or Failed"
Sep  2 20:48:42.950: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 32.533111ms
Sep  2 20:48:44.983: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065474968s
Sep  2 20:48:47.018: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09964499s
Sep  2 20:48:49.051: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.133501028s
Sep  2 20:48:51.085: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.167336644s
Sep  2 20:48:53.120: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202295834s
... skipping 11 lines ...
Sep  2 20:49:17.543: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 34.625522151s
Sep  2 20:49:19.576: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 36.658579453s
Sep  2 20:49:21.611: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Pending", Reason="", readiness=false. Elapsed: 38.692749213s
Sep  2 20:49:23.644: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Running", Reason="", readiness=false. Elapsed: 40.726180835s
Sep  2 20:49:25.679: INFO: Pod "azuredisk-volume-tester-znxbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 42.76081704s
STEP: Saw pod success
Sep  2 20:49:25.679: INFO: Pod "azuredisk-volume-tester-znxbc" satisfied condition "Succeeded or Failed"
Sep  2 20:49:25.679: INFO: deleting Pod "azuredisk-7578"/"azuredisk-volume-tester-znxbc"
Sep  2 20:49:25.720: INFO: Pod azuredisk-volume-tester-znxbc has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-znxbc in namespace azuredisk-7578
STEP: validating provisioned PV
STEP: checking the PV
Sep  2 20:49:25.825: INFO: deleting PVC "azuredisk-7578"/"pvc-4jszx"
Sep  2 20:49:25.825: INFO: Deleting PersistentVolumeClaim "pvc-4jszx"
STEP: waiting for claim's PV "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" to be deleted
Sep  2 20:49:25.859: INFO: Waiting up to 10m0s for PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 to get deleted
Sep  2 20:49:25.890: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Released (31.22543ms)
Sep  2 20:49:30.926: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (5.067369568s)
Sep  2 20:49:35.962: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (10.103615284s)
Sep  2 20:49:40.996: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (15.137363154s)
Sep  2 20:49:46.030: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (20.171587681s)
Sep  2 20:49:51.063: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (25.204504163s)
Sep  2 20:49:56.096: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (30.236910331s)
Sep  2 20:50:01.132: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (35.273018476s)
Sep  2 20:50:06.165: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (40.305934341s)
Sep  2 20:50:11.198: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (45.339559064s)
Sep  2 20:50:16.231: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (50.372456147s)
Sep  2 20:50:21.266: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (55.407043173s)
Sep  2 20:50:26.302: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 found and phase=Failed (1m0.443028166s)
Sep  2 20:50:31.335: INFO: PersistentVolume pvc-091dddad-5413-4a5a-b5e2-346aa248e093 was removed
Sep  2 20:50:31.335: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-7578 to be removed
Sep  2 20:50:31.367: INFO: Claim "azuredisk-7578" in namespace "pvc-4jszx" doesn't exist in the system
Sep  2 20:50:31.367: INFO: deleting StorageClass azuredisk-7578-kubernetes.io-azure-disk-dynamic-sc-55fr7
STEP: validating provisioned PV
STEP: checking the PV
... skipping 509 lines ...
I0902 20:27:56.546394       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2022-09-02 20:20:53 +0000 UTC to 2032-08-30 20:25:53 +0000 UTC (now=2022-09-02 20:27:56.546322519 +0000 UTC))"
I0902 20:27:56.547046       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662150475\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662150475\" (2022-09-02 19:27:55 +0000 UTC to 2023-09-02 19:27:55 +0000 UTC (now=2022-09-02 20:27:56.547010439 +0000 UTC))"
I0902 20:27:56.547557       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662150476\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662150476\" (2022-09-02 19:27:55 +0000 UTC to 2023-09-02 19:27:55 +0000 UTC (now=2022-09-02 20:27:56.547526049 +0000 UTC))"
I0902 20:27:56.547772       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0902 20:27:56.547967       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0902 20:27:56.548684       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0902 20:27:59.113884       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0902 20:27:59.113987       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0902 20:28:02.960608       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0902 20:28:02.961086       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-clawep-control-plane-65svl_6abf870d-8ab0-4202-b8ed-b8d16817053a became leader"
W0902 20:28:03.012625       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0902 20:28:03.013299       1 azure_auth.go:232] Using AzurePublicCloud environment
I0902 20:28:03.013354       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0902 20:28:03.013428       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0902 20:28:03.015266       1 reflector.go:219] Starting reflector *v1.Node (16h33m45.633114553s) from k8s.io/client-go/informers/factory.go:134
I0902 20:28:03.015288       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0902 20:28:03.015502       1 reflector.go:219] Starting reflector *v1.ServiceAccount (16h33m45.633114553s) from k8s.io/client-go/informers/factory.go:134
I0902 20:28:03.015639       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0902 20:28:03.016083       1 reflector.go:219] Starting reflector *v1.Secret (16h33m45.633114553s) from k8s.io/client-go/informers/factory.go:134
I0902 20:28:03.016099       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
W0902 20:28:03.047361       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0902 20:28:03.047390       1 controllermanager.go:562] Starting "podgc"
I0902 20:28:03.058311       1 controllermanager.go:577] Started "podgc"
I0902 20:28:03.058338       1 controllermanager.go:562] Starting "replicationcontroller"
I0902 20:28:03.058569       1 gc_controller.go:89] Starting GC controller
I0902 20:28:03.058725       1 shared_informer.go:240] Waiting for caches to sync for GC
I0902 20:28:03.065339       1 controllermanager.go:577] Started "replicationcontroller"
... skipping 37 lines ...
I0902 20:28:03.321662       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0902 20:28:03.321851       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 20:28:03.322051       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0902 20:28:03.322227       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0902 20:28:03.322396       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0902 20:28:03.322581       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0902 20:28:03.322807       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 20:28:03.322996       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 20:28:03.323361       1 controllermanager.go:577] Started "attachdetach"
I0902 20:28:03.323652       1 controllermanager.go:562] Starting "resourcequota"
I0902 20:28:03.323598       1 attach_detach_controller.go:328] Starting attach detach controller
I0902 20:28:03.324093       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0902 20:28:03.631970       1 resource_quota_monitor.go:177] QuotaMonitor using a shared informer for resource "/v1, Resource=endpoints"
... skipping 117 lines ...
I0902 20:28:05.620616       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0902 20:28:05.620649       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0902 20:28:05.620669       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0902 20:28:05.620692       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0902 20:28:05.620730       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0902 20:28:05.620752       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0902 20:28:05.620776       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0902 20:28:05.620791       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0902 20:28:05.620908       1 controllermanager.go:577] Started "persistentvolume-binder"
I0902 20:28:05.620925       1 controllermanager.go:562] Starting "endpoint"
I0902 20:28:05.620989       1 pv_controller_base.go:308] Starting persistent volume controller
I0902 20:28:05.621063       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0902 20:28:05.769507       1 controllermanager.go:577] Started "endpoint"
... skipping 362 lines ...
I0902 20:28:10.131319       1 deployment_controller.go:215] "ReplicaSet added" replicaSet="kube-system/calico-kube-controllers-969cf87c4"
I0902 20:28:10.131677       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bcb7b687d923ae, ext:15194284950, loc:(*time.Location)(0x751a1a0)}}
I0902 20:28:10.132373       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/calico-kube-controllers-969cf87c4" need=1 creating=1
I0902 20:28:10.135300       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0902 20:28:10.135700       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 20:28:10.130846232 +0000 UTC m=+15.193460123 - now: 2022-09-02 20:28:10.135692341 +0000 UTC m=+15.198306232]
I0902 20:28:10.140411       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="25.095174ms"
I0902 20:28:10.140488       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0902 20:28:10.140849       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 20:28:10.140773128 +0000 UTC m=+15.203387119"
I0902 20:28:10.141442       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 20:28:10 +0000 UTC - now: 2022-09-02 20:28:10.141435155 +0000 UTC m=+15.204049046]
I0902 20:28:10.157423       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="16.626562ms"
I0902 20:28:10.157471       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 20:28:10.157448209 +0000 UTC m=+15.220062200"
I0902 20:28:10.158059       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 20:28:10 +0000 UTC - now: 2022-09-02 20:28:10.158050689 +0000 UTC m=+15.220664681]
I0902 20:28:10.158758       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0902 20:28:10.163255       1 disruption.go:384] add DB "calico-kube-controllers"
I0902 20:28:10.173266       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="15.801033ms"
I0902 20:28:10.173468       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0902 20:28:10.173672       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 20:28:10.173648322 +0000 UTC m=+15.236262213"
I0902 20:28:10.174299       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 20:28:10 +0000 UTC - now: 2022-09-02 20:28:10.174291804 +0000 UTC m=+15.236905795]
I0902 20:28:10.174512       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 599s
I0902 20:28:10.174708       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="1.042663ms"
I0902 20:28:10.179442       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-02 20:28:10.17938472 +0000 UTC m=+15.241998611"
I0902 20:28:10.180229       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-02 20:28:10 +0000 UTC - now: 2022-09-02 20:28:10.18022298 +0000 UTC m=+15.242836971]
... skipping 33 lines ...
I0902 20:28:10.242493       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="715.358µs"
I0902 20:28:10.247884       1 daemon_controller.go:226] Adding daemon set calico-node
I0902 20:28:10.257039       1 disruption.go:391] update DB "calico-kube-controllers"
I0902 20:28:10.263266       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (50.364175ms)
I0902 20:28:10.263428       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (111.173µs)
I0902 20:28:10.279643       1 certificate_controller.go:173] Finished syncing certificate request "csr-ckfh9" (55.328776ms)
I0902 20:28:10.279829       1 certificate_controller.go:151] Sync csr-ckfh9 failed with : recognized csr "csr-ckfh9" as [selfnodeclient nodeclient] but subject access review was not approved
I0902 20:28:10.286101       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-969cf87c4, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0902 20:28:10.300902       1 controller_utils.go:206] Controller kube-system/calico-node either never recorded expectations, or the ttl expired.
I0902 20:28:10.301263       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcb7b691f4d277, ext:15363871326, loc:(*time.Location)(0x751a1a0)}}
I0902 20:28:10.301298       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0902 20:28:10.301503       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0902 20:28:10.301564       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcb7b691f4d277, ext:15363871326, loc:(*time.Location)(0x751a1a0)}}
... skipping 23 lines ...
I0902 20:28:10.333030       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcb7b693d99275, ext:15395639900, loc:(*time.Location)(0x751a1a0)}}
I0902 20:28:10.333159       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0902 20:28:10.333334       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0902 20:28:10.333478       1 daemon_controller.go:1102] Updating daemon set status
I0902 20:28:10.333595       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (4.234406ms)
I0902 20:28:10.502333       1 certificate_controller.go:173] Finished syncing certificate request "csr-ckfh9" (20.280041ms)
I0902 20:28:10.502633       1 certificate_controller.go:151] Sync csr-ckfh9 failed with : recognized csr "csr-ckfh9" as [selfnodeclient nodeclient] but subject access review was not approved
I0902 20:28:10.907111       1 certificate_controller.go:173] Finished syncing certificate request "csr-ckfh9" (3.893769ms)
I0902 20:28:10.907486       1 certificate_controller.go:151] Sync csr-ckfh9 failed with : recognized csr "csr-ckfh9" as [selfnodeclient nodeclient] but subject access review was not approved
I0902 20:28:10.967614       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-clawep-control-plane-65svl}
I0902 20:28:10.967664       1 taint_manager.go:440] "Updating known taints on node" node="capz-clawep-control-plane-65svl" taints=[]
I0902 20:28:10.967840       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-control-plane-65svl"
W0902 20:28:10.967861       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-clawep-control-plane-65svl" does not exist
I0902 20:28:10.967887       1 controller.go:693] Ignoring node capz-clawep-control-plane-65svl with Ready condition status False
I0902 20:28:10.967927       1 controller.go:272] Triggering nodeSync
I0902 20:28:10.967938       1 controller.go:291] nodeSync has been triggered
I0902 20:28:10.967949       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:28:10.967961       1 controller.go:804] Finished updateLoadBalancerHosts
I0902 20:28:10.967970       1 controller.go:731] It took 2.3157e-05 seconds to finish nodeSyncInternal
... skipping 66 lines ...
I0902 20:28:11.443832       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-7fgvj" podUID=0d3e4a1e-2a2e-4099-9ae2-04907822aa6f
I0902 20:28:11.443930       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-969cf87c4", timestamp:time.Time{wall:0xc0bcb7b687d923ae, ext:15194284950, loc:(*time.Location)(0x751a1a0)}}
I0902 20:28:11.444831       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-969cf87c4" (874.168µs)
I0902 20:28:11.469120       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachines/capz-clawep-control-plane-65svl), assuming it is managed by availability set: not a vmss instance
I0902 20:28:11.489887       1 azure_vmss.go:369] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachines/capz-clawep-control-plane-65svl), assuming it is managed by availability set: not a vmss instance
I0902 20:28:11.711341       1 certificate_controller.go:173] Finished syncing certificate request "csr-ckfh9" (3.676044ms)
I0902 20:28:11.711386       1 certificate_controller.go:151] Sync csr-ckfh9 failed with : recognized csr "csr-ckfh9" as [selfnodeclient nodeclient] but subject access review was not approved
I0902 20:28:11.895696       1 controller.go:272] Triggering nodeSync
I0902 20:28:11.895726       1 controller.go:291] nodeSync has been triggered
I0902 20:28:11.895737       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:28:11.895747       1 controller.go:804] Finished updateLoadBalancerHosts
I0902 20:28:11.895755       1 controller.go:731] It took 2.027e-05 seconds to finish nodeSyncInternal
I0902 20:28:11.895777       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-control-plane-65svl"
... skipping 36 lines ...
I0902 20:28:12.431061       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-hpgbs, PodDisruptionBudget controller will avoid syncing.
I0902 20:28:12.431092       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-hpgbs"
I0902 20:28:12.431199       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/coredns-78fcd69978-hpgbs" podUID=6b20a2ee-396c-41df-989f-769fabd6a94e
I0902 20:28:12.430061       1 replica_set.go:443] Pod coredns-78fcd69978-hpgbs updated, objectMeta {Name:coredns-78fcd69978-hpgbs GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:6b20a2ee-396c-41df-989f-769fabd6a94e ResourceVersion:485 Generation:0 CreationTimestamp:2022-09-02 20:28:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:0b739eab-924a-4516-967f-2f0cb354f2b9 Controller:0xc000456547 BlockOwnerDeletion:0xc000456548}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b739eab-924a-4516-967f-2f0cb354f2b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:}]} -> {Name:coredns-78fcd69978-hpgbs GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:6b20a2ee-396c-41df-989f-769fabd6a94e ResourceVersion:492 Generation:0 CreationTimestamp:2022-09-02 20:28:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:0b739eab-924a-4516-967f-2f0cb354f2b9 Controller:0xc000e7c33f BlockOwnerDeletion:0xc000e7c360}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b739eab-924a-4516-967f-2f0cb354f2b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0902 20:28:12.441613       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/coredns-78fcd69978"
I0902 20:28:12.442004       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="51.460013ms"
I0902 20:28:12.442035       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0902 20:28:12.442073       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-02 20:28:12.442051914 +0000 UTC m=+17.504665906"
I0902 20:28:12.442858       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-02 20:28:12 +0000 UTC - now: 2022-09-02 20:28:12.442847481 +0000 UTC m=+17.505461372]
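The "Error syncing deployment ... the object has been modified" line above is an ordinary optimistic-concurrency conflict: another writer updated the Deployment between the controller's read and write, and the controller simply re-reads and retries on the next sync. A minimal sketch of the standard client-go pattern for handling that error (deployment name and namespace taken from the log; the kubeconfig path and the annotation are illustrative assumptions):

// retry_conflict.go - sketch: update a Deployment with RetryOnConflict, the
// usual client-go answer to "the object has been modified" errors.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the latest version on every attempt, then mutate and update.
		d, getErr := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example.com/touched"] = "true" // hypothetical annotation, for illustration only
		_, updateErr := cs.AppsV1().Deployments("kube-system").Update(context.TODO(), d, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
}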
I0902 20:28:12.446013       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (45.923051ms)
I0902 20:28:12.446204       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0bcb7b717d96834, ext:17462737947, loc:(*time.Location)(0x751a1a0)}}
I0902 20:28:12.446443       1 replica_set_utils.go:59] Updating status for : kube-system/coredns-78fcd69978, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0902 20:28:12.457198       1 replica_set.go:443] Pod coredns-78fcd69978-pfn4l updated, objectMeta {Name:coredns-78fcd69978-pfn4l GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:5a038646-4411-4852-8e4f-029ea854c04e ResourceVersion:488 Generation:0 CreationTimestamp:2022-09-02 20:28:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:0b739eab-924a-4516-967f-2f0cb354f2b9 Controller:0xc0019c3a4f BlockOwnerDeletion:0xc0019c3a70}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b739eab-924a-4516-967f-2f0cb354f2b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:}]} -> {Name:coredns-78fcd69978-pfn4l GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:5a038646-4411-4852-8e4f-029ea854c04e ResourceVersion:497 Generation:0 CreationTimestamp:2022-09-02 20:28:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:0b739eab-924a-4516-967f-2f0cb354f2b9 Controller:0xc000c94fcf BlockOwnerDeletion:0xc000c95000}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b739eab-924a-4516-967f-2f0cb354f2b9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-02 20:28:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
... skipping 544 lines ...
I0902 20:28:51.143947       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="567.2µs"
I0902 20:28:51.150732       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (47.041158ms)
I0902 20:28:51.151047       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (281.9µs)
I0902 20:28:52.147741       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (332.03µs)
I0902 20:28:52.412470       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:28:52.524030       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:28:52.557595       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-clawep-control-plane-65svl transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 20:28:27 +0000 UTC,LastTransitionTime:2022-09-02 20:27:43 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 20:28:47 +0000 UTC,LastTransitionTime:2022-09-02 20:28:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 20:28:52.557754       1 node_lifecycle_controller.go:1047] Node capz-clawep-control-plane-65svl ReadyCondition updated. Updating timestamp.
I0902 20:28:52.557787       1 node_lifecycle_controller.go:893] Node capz-clawep-control-plane-65svl is healthy again, removing all taints
I0902 20:28:52.557807       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
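The lines above show the node-lifecycle controller flipping the control-plane node to Ready once the CNI is initialized, removing its not-ready taints, and leaving master disruption mode. A small sketch of the kind of wait-for-Ready polling the earlier "condition met" steps perform (kubeconfig path and timeouts are assumptions, not taken from this job):

// wait_ready.go - sketch: poll until every Node reports a Ready=True condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, n := range nodes.Items {
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				fmt.Printf("node %s not Ready yet\n", n.Name)
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all nodes Ready")
}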
I0902 20:28:55.023945       1 disruption.go:427] updatePod called on pod "calico-node-fjl4p"
I0902 20:28:55.024031       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-fjl4p, PodDisruptionBudget controller will avoid syncing.
I0902 20:28:55.024040       1 disruption.go:430] No matching pdb for pod "calico-node-fjl4p"
... skipping 118 lines ...
I0902 20:29:48.010515       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:29:48.010675       1 controller.go:804] Finished updateLoadBalancerHosts
I0902 20:29:48.010825       1 controller.go:731] It took 0.000311633 seconds to finish nodeSyncInternal
I0902 20:29:48.015920       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcb7c1c3ccea2b, ext:60126374930, loc:(*time.Location)(0x751a1a0)}}
I0902 20:29:48.021328       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:29:48.021440       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bcb7cf014713c5, ext:113084049324, loc:(*time.Location)(0x751a1a0)}}
W0902 20:29:48.021487       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-clawep-mp-0000000" does not exist
I0902 20:29:48.021576       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-clawep-mp-0000000], creating 1
I0902 20:29:48.022021       1 certificate_controller.go:87] Updating certificate request csr-5fv68
I0902 20:29:48.022046       1 certificate_controller.go:87] Updating certificate request csr-5fv68
I0902 20:29:48.017118       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bcb7ba19f79965, ext:29498270952, loc:(*time.Location)(0x751a1a0)}}
I0902 20:29:48.022414       1 certificate_controller.go:173] Finished syncing certificate request "csr-5fv68" (353.037µs)
I0902 20:29:48.022439       1 certificate_controller.go:87] Updating certificate request csr-5fv68
... skipping 130 lines ...
I0902 20:29:52.670623       1 controller_utils.go:221] Made sure that Node capz-clawep-mp-0000000 has no [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I0902 20:29:52.670772       1 taint_manager.go:440] "Updating known taints on node" node="capz-clawep-mp-0000000" taints=[{Key:node.kubernetes.io/not-ready Value: Effect:NoExecute TimeAdded:2022-09-02 20:29:52 +0000 UTC}]
I0902 20:29:52.670813       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:29:52.670909       1 taint_manager.go:361] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="kube-system/calico-node-lfqz6"
I0902 20:29:52.670935       1 taint_manager.go:361] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="kube-system/kube-proxy-hl9jz"
I0902 20:29:53.175614       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
W0902 20:29:53.175907       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-clawep-mp-0000001" does not exist
I0902 20:29:53.176658       1 controller.go:693] Ignoring node capz-clawep-mp-0000000 with Ready condition status False
I0902 20:29:53.176807       1 controller.go:693] Ignoring node capz-clawep-mp-0000001 with Ready condition status False
I0902 20:29:53.176954       1 controller.go:272] Triggering nodeSync
I0902 20:29:53.177086       1 controller.go:291] nodeSync has been triggered
I0902 20:29:53.177209       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:29:53.177346       1 controller.go:804] Finished updateLoadBalancerHosts
... skipping 343 lines ...
I0902 20:30:15.433216       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:30:15.450547       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:30:15.634162       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:30:17.448171       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-clawep-mp-0000000"
I0902 20:30:17.448217       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Creating" for node "capz-clawep-mp-0000000"
I0902 20:30:17.520851       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="73.208µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48030" resp=200
I0902 20:30:17.570073       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-clawep-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 20:30:03 +0000 UTC,LastTransitionTime:2022-09-02 20:29:53 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 20:30:13 +0000 UTC,LastTransitionTime:2022-09-02 20:30:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 20:30:17.570166       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000001 ReadyCondition updated. Updating timestamp.
I0902 20:30:17.581887       1 node_lifecycle_controller.go:893] Node capz-clawep-mp-0000001 is healthy again, removing all taints
I0902 20:30:17.581967       1 node_lifecycle_controller.go:1214] Controller detected that zone eastus2::1 is now in state Normal.
I0902 20:30:17.582318       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-clawep-mp-0000001}
I0902 20:30:17.582486       1 taint_manager.go:440] "Updating known taints on node" node="capz-clawep-mp-0000001" taints=[]
I0902 20:30:17.582639       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-clawep-mp-0000001"
... skipping 51 lines ...
I0902 20:30:21.622670       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0902 20:30:21.622818       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0902 20:30:21.622854       1 daemon_controller.go:1102] Updating daemon set status
I0902 20:30:21.623506       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.584558ms)
I0902 20:30:22.416417       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:30:22.528383       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:30:22.583134       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-clawep-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-02 20:29:58 +0000 UTC,LastTransitionTime:2022-09-02 20:29:47 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-02 20:30:18 +0000 UTC,LastTransitionTime:2022-09-02 20:30:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0902 20:30:22.583220       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 20:30:22.592614       1 node_lifecycle_controller.go:893] Node capz-clawep-mp-0000000 is healthy again, removing all taints
I0902 20:30:22.593994       1 node_lifecycle_controller.go:1214] Controller detected that zone eastus2::0 is now in state Normal.
I0902 20:30:22.593560       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-clawep-mp-0000000}
I0902 20:30:22.594406       1 taint_manager.go:440] "Updating known taints on node" node="capz-clawep-mp-0000000" taints=[]
I0902 20:30:22.594611       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-clawep-mp-0000000"
... skipping 192 lines ...
I0902 20:32:03.476818       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-gqj2g, uid d1d11dab-95f8-400c-89fb-6ec78e687066, event type delete
I0902 20:32:03.533812       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8081, estimate: 0, errors: <nil>
I0902 20:32:03.533909       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.7µs)
I0902 20:32:03.544407       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (194.783741ms)
I0902 20:32:03.662728       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2540
I0902 20:32:03.697109       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-zktfq, uid 648ce979-e3dd-43ae-b634-8091236f4039, event type delete
E0902 20:32:03.712788       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-8p2sj" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0902 20:32:03.726436       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2540, name kube-root-ca.crt, uid 0b6314ec-93cc-40c6-b0ec-9e32b95ca610, event type delete
I0902 20:32:03.729096       1 publisher.go:186] Finished syncing namespace "azuredisk-2540" (2.452449ms)
I0902 20:32:03.736645       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0902 20:32:03.736688       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid d6c01c22-60a9-4776-b837-b526f5dc8c5c, event type delete
I0902 20:32:03.737194       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.301µs)
I0902 20:32:03.804060       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3.6µs)
... skipping 29 lines ...
I0902 20:32:04.317250       1 publisher.go:186] Finished syncing namespace "azuredisk-5466" (2.377442ms)
I0902 20:32:04.414182       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5466" (3µs)
I0902 20:32:04.415719       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5466, estimate: 0, errors: <nil>
I0902 20:32:04.427261       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-5466" (171.513634ms)
I0902 20:32:04.560114       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0902 20:32:04.610454       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-pr6rs, uid 926234c3-9886-48f6-88e7-4aaeb2834ecf, event type delete
E0902 20:32:04.631078       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-svp72" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0902 20:32:04.647838       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2790, name kube-root-ca.crt, uid 65b9e134-01ff-4b39-8820-4705746c89dd, event type delete
I0902 20:32:04.650183       1 publisher.go:186] Finished syncing namespace "azuredisk-2790" (2.116315ms)
I0902 20:32:04.667620       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0902 20:32:04.667847       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid 7af11f82-9aa0-45f1-817b-f90237554549, event type delete
I0902 20:32:04.668068       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2.801µs)
I0902 20:32:04.725238       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2µs)
I0902 20:32:04.725595       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2790, estimate: 0, errors: <nil>
I0902 20:32:04.766843       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2790" (209.876633ms)
I0902 20:32:04.864923       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5356
I0902 20:32:04.942574       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5356, name kube-root-ca.crt, uid 8068be7f-a3f4-4aa4-8884-f34ff6607783, event type delete
I0902 20:32:04.945350       1 publisher.go:186] Finished syncing namespace "azuredisk-5356" (2.512755ms)
I0902 20:32:04.991652       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5356, name default-token-jrtbm, uid 3457c19e-2645-4e6a-9df9-6501f703e7b7, event type delete
E0902 20:32:05.054163       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-xv4sc" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I0902 20:32:05.161141       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5356/default), service account deleted, removing tokens
I0902 20:32:05.161460       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5356, name default, uid 5ebd4e86-dbcc-47e6-9d14-e75ae84411ba, event type delete
I0902 20:32:05.161857       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (3µs)
I0902 20:32:05.174760       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5194
I0902 20:32:05.200347       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5356" (1.701µs)
I0902 20:32:05.200562       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-5356, estimate: 0, errors: <nil>
... skipping 170 lines ...
I0902 20:32:33.625217       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:32:33.625397       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:32:33.625541       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:32:33.625558       1 pv_controller.go:1763] operation "delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]" is already running, skipping
I0902 20:32:33.626691       1 pv_controller.go:1340] isVolumeReleased[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is released
I0902 20:32:33.626710       1 pv_controller.go:1404] doDeleteVolume [pvc-0aa46493-c66f-4355-8724-5869529b8f51]
I0902 20:32:33.673329       1 pv_controller.go:1259] deletion of volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:32:33.673384       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: set phase Failed
I0902 20:32:33.673404       1 pv_controller.go:858] updating PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: set phase Failed
I0902 20:32:33.678398       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1254
I0902 20:32:33.678752       1 pv_controller.go:879] volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" entered phase "Failed"
I0902 20:32:33.679019       1 pv_controller.go:901] volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
E0902 20:32:33.679243       1 goroutinemap.go:150] Operation for "delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]" failed. No retries permitted until 2022-09-02 20:32:34.17922204 +0000 UTC m=+279.241836031 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:32:33.678441       1 pv_protection_controller.go:205] Got event on PV pvc-0aa46493-c66f-4355-8724-5869529b8f51
I0902 20:32:33.678458       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1254
I0902 20:32:33.679675       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase: Failed, bound to: "azuredisk-1353/pvc-qhzsv (uid: 0aa46493-c66f-4355-8724-5869529b8f51)", boundByController: true
I0902 20:32:33.679870       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is bound to claim azuredisk-1353/pvc-qhzsv
I0902 20:32:33.680116       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:32:33.680265       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:32:33.680430       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:32:33.680575       1 pv_controller.go:1765] operation "delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]" postponed due to exponential backoff
I0902 20:32:33.679700       1 event.go:291] "Event occurred" object="pvc-0aa46493-c66f-4355-8724-5869529b8f51" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
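The goroutinemap error above is the PV controller deferring the next delete attempt with exponential backoff (500ms here, then 1s and 2s on later attempts) because the disk is still attached to the VMSS instance. A stripped-down sketch of that retry pattern using the apimachinery backoff helper; deleteDisk below is a placeholder stand-in, not the driver's real cloud call:

// backoff_delete.go - sketch: retry a cleanup call with exponential backoff,
// mirroring the 500ms -> 1s -> 2s retry spacing visible in the controller log.
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// deleteDisk is a stand-in for the real cloud call; it fails while the disk
// is still attached and succeeds once it has been detached.
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errors.New("disk already attached to node, could not be deleted")
	}
	return nil
}

func main() {
	attempt := 0
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempt++
		if err := deleteDisk(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v\n", attempt, err)
			return false, nil // retry after the next backoff interval
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("disk deleted")
}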
I0902 20:32:37.423052       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:32:37.427377       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:32:37.520137       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="84.609µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:42666" resp=200
I0902 20:32:37.534171       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:32:37.534357       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1254
I0902 20:32:37.534439       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase: Failed, bound to: "azuredisk-1353/pvc-qhzsv (uid: 0aa46493-c66f-4355-8724-5869529b8f51)", boundByController: true
I0902 20:32:37.534509       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is bound to claim azuredisk-1353/pvc-qhzsv
I0902 20:32:37.534535       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:32:37.534547       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:32:37.534621       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:32:37.534693       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0aa46493-c66f-4355-8724-5869529b8f51] started
I0902 20:32:37.540665       1 pv_controller.go:1340] isVolumeReleased[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is released
I0902 20:32:37.540685       1 pv_controller.go:1404] doDeleteVolume [pvc-0aa46493-c66f-4355-8724-5869529b8f51]
I0902 20:32:37.563833       1 pv_controller.go:1259] deletion of volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:32:37.563877       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: set phase Failed
I0902 20:32:37.563913       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase Failed already set
E0902 20:32:37.563940       1 goroutinemap.go:150] Operation for "delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]" failed. No retries permitted until 2022-09-02 20:32:38.56392201 +0000 UTC m=+283.626535901 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:32:37.905233       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:32:38.228429       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:32:38.228526       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51 to the node "capz-clawep-mp-0000000" mounted false
I0902 20:32:38.337130       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:32:38.337168       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51 to the node "capz-clawep-mp-0000000" mounted false
I0902 20:32:38.338098       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-clawep-mp-0000000" succeeded. VolumesAttached: []
... skipping 9 lines ...
I0902 20:32:47.576365       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:32:48.242547       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:32:48.242807       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51 to the node "capz-clawep-mp-0000000" mounted false
I0902 20:32:52.423288       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:32:52.534960       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:32:52.535242       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1254
I0902 20:32:52.535300       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase: Failed, bound to: "azuredisk-1353/pvc-qhzsv (uid: 0aa46493-c66f-4355-8724-5869529b8f51)", boundByController: true
I0902 20:32:52.535349       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is bound to claim azuredisk-1353/pvc-qhzsv
I0902 20:32:52.535395       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:32:52.535415       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:32:52.535436       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:32:52.535485       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0aa46493-c66f-4355-8724-5869529b8f51] started
I0902 20:32:52.547258       1 pv_controller.go:1340] isVolumeReleased[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is released
I0902 20:32:52.547280       1 pv_controller.go:1404] doDeleteVolume [pvc-0aa46493-c66f-4355-8724-5869529b8f51]
I0902 20:32:52.547313       1 pv_controller.go:1259] deletion of volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) since it's in attaching or detaching state
I0902 20:32:52.547326       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: set phase Failed
I0902 20:32:52.547337       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase Failed already set
E0902 20:32:52.547365       1 goroutinemap.go:150] Operation for "delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]" failed. No retries permitted until 2022-09-02 20:32:54.547346416 +0000 UTC m=+299.609960407 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) since it's in attaching or detaching state
I0902 20:32:52.617313       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 20:32:53.717610       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) returned with <nil>
I0902 20:32:53.717842       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51) succeeded
I0902 20:32:53.717932       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51 was detached from node:capz-clawep-mp-0000000
I0902 20:32:53.718030       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51") on node "capz-clawep-mp-0000000" 
I0902 20:32:57.520252       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="197.52µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38660" resp=200
... skipping 6 lines ...
I0902 20:33:07.477560       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:33:07.477595       1 controller.go:804] Finished updateLoadBalancerHosts
I0902 20:33:07.477605       1 controller.go:731] It took 4.5004e-05 seconds to finish nodeSyncInternal
I0902 20:33:07.520946       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.608µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47838" resp=200
I0902 20:33:07.535561       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:33:07.535621       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1254
I0902 20:33:07.535662       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase: Failed, bound to: "azuredisk-1353/pvc-qhzsv (uid: 0aa46493-c66f-4355-8724-5869529b8f51)", boundByController: true
I0902 20:33:07.535736       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is bound to claim azuredisk-1353/pvc-qhzsv
I0902 20:33:07.535785       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:33:07.535800       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:33:07.535818       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:33:07.535883       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0aa46493-c66f-4355-8724-5869529b8f51] started
I0902 20:33:07.539710       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
... skipping 5 lines ...
I0902 20:33:12.739371       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-0aa46493-c66f-4355-8724-5869529b8f51
I0902 20:33:12.739416       1 pv_controller.go:1435] volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" deleted
I0902 20:33:12.739428       1 pv_controller.go:1283] deleteVolumeOperation [pvc-0aa46493-c66f-4355-8724-5869529b8f51]: success
I0902 20:33:12.747827       1 pv_protection_controller.go:205] Got event on PV pvc-0aa46493-c66f-4355-8724-5869529b8f51
I0902 20:33:12.747880       1 pv_protection_controller.go:125] Processing PV pvc-0aa46493-c66f-4355-8724-5869529b8f51
I0902 20:33:12.748227       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" with version 1313
I0902 20:33:12.748273       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: phase: Failed, bound to: "azuredisk-1353/pvc-qhzsv (uid: 0aa46493-c66f-4355-8724-5869529b8f51)", boundByController: true
I0902 20:33:12.748301       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: volume is bound to claim azuredisk-1353/pvc-qhzsv
I0902 20:33:12.748331       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: claim azuredisk-1353/pvc-qhzsv not found
I0902 20:33:12.748344       1 pv_controller.go:1108] reclaimVolume[pvc-0aa46493-c66f-4355-8724-5869529b8f51]: policy is Delete
I0902 20:33:12.748362       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0aa46493-c66f-4355-8724-5869529b8f51[89b784fe-232a-43e7-8038-f03272eb5eac]]
I0902 20:33:12.748388       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0aa46493-c66f-4355-8724-5869529b8f51] started
I0902 20:33:12.759955       1 pv_controller.go:1243] Volume "pvc-0aa46493-c66f-4355-8724-5869529b8f51" is already being deleted
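Once the disk is detached, the delete succeeds and the PersistentVolume object itself is removed; a test harness typically just polls until the Get returns NotFound. A minimal sketch of that wait (PV name copied from the log above; kubeconfig path and timeouts are assumptions):

// wait_pv_gone.go - sketch: poll until a PersistentVolume has been deleted.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pvName := "pvc-0aa46493-c66f-4355-8724-5869529b8f51" // from the log above
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // fully deleted
		}
		return false, nil // still present (or transient error): keep polling
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("PV %s deleted\n", pvName)
}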
... skipping 121 lines ...
I0902 20:33:19.189736       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-qhzsv.171125462c7ca453, uid d5e6b4a6-16c7-4226-bbb2-cac08d19d769, event type delete
I0902 20:33:19.193728       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name pvc-qhzsv.17112546bff8f636, uid cfabae78-5b6e-4ee6-9fcb-92b20ad3c015, event type delete
I0902 20:33:19.195526       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" to node "capz-clawep-mp-0000000".
I0902 20:33:19.236971       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-ddbsn, uid 8aafeb46-d8e7-4ba1-8709-b9f6bcb62c94, event type delete
I0902 20:33:19.250054       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" lun 0 to node "capz-clawep-mp-0000000".
I0902 20:33:19.250417       1 azure_controller_vmss.go:101] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - attach disk(capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) with DiskEncryptionSetID()
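The attach sequence above (GetDiskLun finds no existing LUN, the controller picks lun 0 and issues the attach) is the in-tree azure-disk attach path. For disks attached through a CSI driver, the same state is recorded in VolumeAttachment objects, which can be listed to see which node each PV is attached to; a small sketch (kubeconfig path assumed, not part of this job):

// list_attachments.go - sketch: list VolumeAttachment objects to see
// which node each persistent volume is currently attached to.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	vas, err := cs.StorageV1().VolumeAttachments().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, va := range vas.Items {
		pv := "<inline>"
		if va.Spec.Source.PersistentVolumeName != nil {
			pv = *va.Spec.Source.PersistentVolumeName
		}
		fmt.Printf("%s\tpv=%s\tnode=%s\tattached=%t\n", va.Name, pv, va.Spec.NodeName, va.Status.Attached)
	}
}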
E0902 20:33:19.256297       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-tpdhk" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0902 20:33:19.260849       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1353/default), service account deleted, removing tokens
I0902 20:33:19.260898       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1353, name default, uid fce0390e-5712-4176-84d7-6032bf5f9c18, event type delete
I0902 20:33:19.261745       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (2.8µs)
I0902 20:33:19.272694       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1353" (2.501µs)
I0902 20:33:19.272807       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1353, estimate: 0, errors: <nil>
I0902 20:33:19.288987       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1353" (232.938742ms)
... skipping 8 lines ...
I0902 20:33:19.906311       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2888" (3.4µs)
I0902 20:33:19.906507       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2888, estimate: 0, errors: <nil>
I0902 20:33:19.938805       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2888" (189.112195ms)
I0902 20:33:20.403976       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-156
I0902 20:33:20.421106       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 18 items received
I0902 20:33:20.436977       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-156, name default-token-nw54k, uid e9fc3aa0-b0c4-48f8-b946-2e10bd8f8ab0, event type delete
E0902 20:33:20.463785       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-156/default: secrets "default-token-2b9kh" is forbidden: unable to create new content in namespace azuredisk-156 because it is being terminated
I0902 20:33:20.497156       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-156, name default, uid 8402d8c5-95ad-4e04-b86d-4d39b964ef3e, event type delete
I0902 20:33:20.497166       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (2.4µs)
I0902 20:33:20.497617       1 tokens_controller.go:252] syncServiceAccount(azuredisk-156/default), service account deleted, removing tokens
I0902 20:33:20.522817       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-156, name kube-root-ca.crt, uid 1d73dd8c-5bc6-43ab-a80d-963736e9d15a, event type delete
I0902 20:33:20.526026       1 publisher.go:186] Finished syncing namespace "azuredisk-156" (2.996304ms)
I0902 20:33:20.563366       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-156" (2.4µs)
... skipping 381 lines ...
I0902 20:35:23.928767       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: claim azuredisk-1563/pvc-25xpm not found
I0902 20:35:23.928933       1 pv_controller.go:1108] reclaimVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: policy is Delete
I0902 20:35:23.929103       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]]
I0902 20:35:23.929253       1 pv_controller.go:1763] operation "delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]" is already running, skipping
I0902 20:35:23.930480       1 pv_controller.go:1340] isVolumeReleased[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is released
I0902 20:35:23.930498       1 pv_controller.go:1404] doDeleteVolume [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]
I0902 20:35:23.956466       1 pv_controller.go:1259] deletion of volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:35:23.956488       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: set phase Failed
I0902 20:35:23.956497       1 pv_controller.go:858] updating PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: set phase Failed
I0902 20:35:23.959938       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" with version 1574
I0902 20:35:23.960191       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: phase: Failed, bound to: "azuredisk-1563/pvc-25xpm (uid: ffbc4b97-e6bd-4891-97e7-b592c2f516d7)", boundByController: true
I0902 20:35:23.960335       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is bound to claim azuredisk-1563/pvc-25xpm
I0902 20:35:23.960424       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: claim azuredisk-1563/pvc-25xpm not found
I0902 20:35:23.960489       1 pv_controller.go:1108] reclaimVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: policy is Delete
I0902 20:35:23.960511       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]]
I0902 20:35:23.960598       1 pv_controller.go:1763] operation "delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]" is already running, skipping
I0902 20:35:23.960681       1 pv_protection_controller.go:205] Got event on PV pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7
I0902 20:35:23.961448       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" with version 1574
I0902 20:35:23.961474       1 pv_controller.go:879] volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" entered phase "Failed"
I0902 20:35:23.961504       1 pv_controller.go:901] volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:35:23.961913       1 event.go:291] "Event occurred" object="pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
E0902 20:35:23.962245       1 goroutinemap.go:150] Operation for "delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]" failed. No retries permitted until 2022-09-02 20:35:24.461537816 +0000 UTC m=+449.524151707 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:35:25.022594       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 76 items received
I0902 20:35:25.695706       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0902 20:35:27.520301       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.106µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46776" resp=200
I0902 20:35:27.580793       1 gc_controller.go:161] GC'ing orphaned
I0902 20:35:27.580849       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:35:27.642129       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000001 ReadyCondition updated. Updating timestamp.
... skipping 12 lines ...
I0902 20:35:35.467474       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 59 items received
I0902 20:35:37.430546       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:35:37.431633       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:35:37.520271       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="142.513µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50690" resp=200
I0902 20:35:37.541659       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:35:37.541724       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" with version 1574
I0902 20:35:37.541766       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: phase: Failed, bound to: "azuredisk-1563/pvc-25xpm (uid: ffbc4b97-e6bd-4891-97e7-b592c2f516d7)", boundByController: true
I0902 20:35:37.541806       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is bound to claim azuredisk-1563/pvc-25xpm
I0902 20:35:37.541832       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: claim azuredisk-1563/pvc-25xpm not found
I0902 20:35:37.541842       1 pv_controller.go:1108] reclaimVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: policy is Delete
I0902 20:35:37.541860       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]]
I0902 20:35:37.541893       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7] started
I0902 20:35:37.546819       1 pv_controller.go:1340] isVolumeReleased[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is released
I0902 20:35:37.546840       1 pv_controller.go:1404] doDeleteVolume [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]
I0902 20:35:37.546910       1 pv_controller.go:1259] deletion of volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) since it's in attaching or detaching state
I0902 20:35:37.546945       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: set phase Failed
I0902 20:35:37.547012       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: phase Failed already set
E0902 20:35:37.547076       1 goroutinemap.go:150] Operation for "delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]" failed. No retries permitted until 2022-09-02 20:35:38.547022952 +0000 UTC m=+463.609636943 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) since it's in attaching or detaching state
I0902 20:35:38.002295       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:35:41.424969       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 44 items received
I0902 20:35:43.756299       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) returned with <nil>
I0902 20:35:43.756360       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7) succeeded
I0902 20:35:43.756372       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7 was detached from node:capz-clawep-mp-0000000
I0902 20:35:43.756402       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7") on node "capz-clawep-mp-0000000" 
I0902 20:35:47.520183       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="104.11µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36414" resp=200
I0902 20:35:47.581109       1 gc_controller.go:161] GC'ing orphaned
I0902 20:35:47.581167       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:35:49.377788       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0902 20:35:52.431509       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:35:52.542257       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:35:52.542375       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" with version 1574
I0902 20:35:52.542458       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: phase: Failed, bound to: "azuredisk-1563/pvc-25xpm (uid: ffbc4b97-e6bd-4891-97e7-b592c2f516d7)", boundByController: true
I0902 20:35:52.542537       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is bound to claim azuredisk-1563/pvc-25xpm
I0902 20:35:52.542565       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: claim azuredisk-1563/pvc-25xpm not found
I0902 20:35:52.542600       1 pv_controller.go:1108] reclaimVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: policy is Delete
I0902 20:35:52.542635       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]]
I0902 20:35:52.542685       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7] started
I0902 20:35:52.546109       1 pv_controller.go:1340] isVolumeReleased[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is released
... skipping 3 lines ...
I0902 20:35:57.738341       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7
I0902 20:35:57.738373       1 pv_controller.go:1435] volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" deleted
I0902 20:35:57.738386       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: success
I0902 20:35:57.746387       1 pv_protection_controller.go:205] Got event on PV pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7
I0902 20:35:57.746629       1 pv_protection_controller.go:125] Processing PV pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7
I0902 20:35:57.746562       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" with version 1626
I0902 20:35:57.746983       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: phase: Failed, bound to: "azuredisk-1563/pvc-25xpm (uid: ffbc4b97-e6bd-4891-97e7-b592c2f516d7)", boundByController: true
I0902 20:35:57.747126       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: volume is bound to claim azuredisk-1563/pvc-25xpm
I0902 20:35:57.747240       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: claim azuredisk-1563/pvc-25xpm not found
I0902 20:35:57.747357       1 pv_controller.go:1108] reclaimVolume[pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7]: policy is Delete
I0902 20:35:57.747476       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7[80b47663-8488-4e7c-81dd-df5496425843]]
I0902 20:35:57.747641       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7] started
I0902 20:35:57.751670       1 pv_controller.go:1243] Volume "pvc-ffbc4b97-e6bd-4891-97e7-b592c2f516d7" is already being deleted
... skipping 103 lines ...
I0902 20:36:03.189619       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878" to node "capz-clawep-mp-0000001".
I0902 20:36:03.213532       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878" lun 0 to node "capz-clawep-mp-0000001".
I0902 20:36:03.213575       1 azure_controller_vmss.go:101] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000001) - attach disk(capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) with DiskEncryptionSetID()
I0902 20:36:03.576834       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
I0902 20:36:04.451179       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1563
I0902 20:36:04.475820       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1563, name default-token-ssccx, uid 18866aa5-3c2e-48e5-b3aa-d8b1d0cbd859, event type delete
E0902 20:36:04.494132       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1563/default: secrets "default-token-g2fhp" is forbidden: unable to create new content in namespace azuredisk-1563 because it is being terminated
I0902 20:36:04.532026       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.17112558650394b6, uid 5d5fb562-0ba8-45f8-8617-6c9f92d1643b, event type delete
I0902 20:36:04.535741       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.1711255c08e6f80c, uid a8bc6601-78c3-4c9c-b683-dde7ec0444f8, event type delete
I0902 20:36:04.539004       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.1711255c96f15a42, uid e4341892-032d-4401-8b8d-5e70c521fa70, event type delete
I0902 20:36:04.542981       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.1711255d2163cd45, uid e6c99c6e-470c-4888-98a5-7286d6085ac9, event type delete
I0902 20:36:04.546776       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.1711255dc43d6e58, uid 6dd25538-a7ff-4121-aa7c-fb2a44fc13f9, event type delete
I0902 20:36:04.550109       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1563, name azuredisk-volume-tester-8pp78.1711255ea118d0ca, uid f7cd97eb-f360-4826-a6b7-94d4a892acd3, event type delete
... skipping 134 lines ...
I0902 20:36:20.734847       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:20.734946       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:20.734963       1 pv_controller.go:1763] operation "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" is already running, skipping
I0902 20:36:20.734741       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4c087965-1007-4357-972d-d588a3c26878] started
I0902 20:36:20.737199       1 pv_controller.go:1340] isVolumeReleased[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is released
I0902 20:36:20.737219       1 pv_controller.go:1404] doDeleteVolume [pvc-4c087965-1007-4357-972d-d588a3c26878]
I0902 20:36:20.760577       1 pv_controller.go:1259] deletion of volume "pvc-4c087965-1007-4357-972d-d588a3c26878" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:36:20.760598       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4c087965-1007-4357-972d-d588a3c26878]: set phase Failed
I0902 20:36:20.760607       1 pv_controller.go:858] updating PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: set phase Failed
I0902 20:36:20.764280       1 pv_protection_controller.go:205] Got event on PV pvc-4c087965-1007-4357-972d-d588a3c26878
I0902 20:36:20.764351       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1723
I0902 20:36:20.764388       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase: Failed, bound to: "azuredisk-7463/pvc-6tzst (uid: 4c087965-1007-4357-972d-d588a3c26878)", boundByController: true
I0902 20:36:20.764464       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is bound to claim azuredisk-7463/pvc-6tzst
I0902 20:36:20.764489       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: claim azuredisk-7463/pvc-6tzst not found
I0902 20:36:20.764497       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:20.764551       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:20.764560       1 pv_controller.go:1763] operation "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" is already running, skipping
I0902 20:36:20.764703       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1723
I0902 20:36:20.764886       1 pv_controller.go:879] volume "pvc-4c087965-1007-4357-972d-d588a3c26878" entered phase "Failed"
I0902 20:36:20.764907       1 pv_controller.go:901] volume "pvc-4c087965-1007-4357-972d-d588a3c26878" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
E0902 20:36:20.765123       1 goroutinemap.go:150] Operation for "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" failed. No retries permitted until 2022-09-02 20:36:21.265102891 +0000 UTC m=+506.327716882 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:36:20.765285       1 event.go:291] "Event occurred" object="pvc-4c087965-1007-4357-972d-d588a3c26878" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted"
I0902 20:36:22.432582       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:36:22.543474       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:36:22.543526       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1723
I0902 20:36:22.543779       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase: Failed, bound to: "azuredisk-7463/pvc-6tzst (uid: 4c087965-1007-4357-972d-d588a3c26878)", boundByController: true
I0902 20:36:22.543946       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is bound to claim azuredisk-7463/pvc-6tzst
I0902 20:36:22.544101       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: claim azuredisk-7463/pvc-6tzst not found
I0902 20:36:22.544271       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:22.544395       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:22.544968       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4c087965-1007-4357-972d-d588a3c26878] started
I0902 20:36:22.558396       1 pv_controller.go:1340] isVolumeReleased[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is released
I0902 20:36:22.559157       1 pv_controller.go:1404] doDeleteVolume [pvc-4c087965-1007-4357-972d-d588a3c26878]
I0902 20:36:22.582724       1 pv_controller.go:1259] deletion of volume "pvc-4c087965-1007-4357-972d-d588a3c26878" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:36:22.582754       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4c087965-1007-4357-972d-d588a3c26878]: set phase Failed
I0902 20:36:22.582765       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase Failed already set
E0902 20:36:22.582794       1 goroutinemap.go:150] Operation for "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" failed. No retries permitted until 2022-09-02 20:36:23.582775012 +0000 UTC m=+508.645389003 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:36:23.597069       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
I0902 20:36:23.599770       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878 to the node "capz-clawep-mp-0000001" mounted false
I0902 20:36:23.642203       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-clawep-mp-0000001" succeeded. VolumesAttached: []
I0902 20:36:23.643071       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-4c087965-1007-4357-972d-d588a3c26878" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878") on node "capz-clawep-mp-0000001" 
I0902 20:36:23.643693       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
I0902 20:36:23.644831       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878 to the node "capz-clawep-mp-0000001" mounted false
... skipping 18 lines ...
I0902 20:36:32.630494       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0902 20:36:37.432475       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:36:37.432742       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:36:37.519972       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="192.119µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47972" resp=200
I0902 20:36:37.544137       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:36:37.544311       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1723
I0902 20:36:37.544373       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase: Failed, bound to: "azuredisk-7463/pvc-6tzst (uid: 4c087965-1007-4357-972d-d588a3c26878)", boundByController: true
I0902 20:36:37.544412       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is bound to claim azuredisk-7463/pvc-6tzst
I0902 20:36:37.544459       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: claim azuredisk-7463/pvc-6tzst not found
I0902 20:36:37.544476       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:37.544510       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:37.544551       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4c087965-1007-4357-972d-d588a3c26878] started
I0902 20:36:37.555504       1 pv_controller.go:1340] isVolumeReleased[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is released
I0902 20:36:37.555525       1 pv_controller.go:1404] doDeleteVolume [pvc-4c087965-1007-4357-972d-d588a3c26878]
I0902 20:36:37.555583       1 pv_controller.go:1259] deletion of volume "pvc-4c087965-1007-4357-972d-d588a3c26878" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) since it's in attaching or detaching state
I0902 20:36:37.555627       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4c087965-1007-4357-972d-d588a3c26878]: set phase Failed
I0902 20:36:37.555690       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase Failed already set
E0902 20:36:37.555838       1 goroutinemap.go:150] Operation for "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" failed. No retries permitted until 2022-09-02 20:36:39.555792522 +0000 UTC m=+524.618406413 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) since it's in attaching or detaching state
I0902 20:36:38.035002       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:36:39.076862       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) returned with <nil>
I0902 20:36:39.076921       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878) succeeded
I0902 20:36:39.076934       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878 was detached from node:capz-clawep-mp-0000001
I0902 20:36:39.077156       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-4c087965-1007-4357-972d-d588a3c26878" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878") on node "capz-clawep-mp-0000001" 
I0902 20:36:39.391494       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
... skipping 3 lines ...
I0902 20:36:47.590582       1 gc_controller.go:161] GC'ing orphaned
I0902 20:36:47.590610       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:36:51.826396       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0902 20:36:52.433416       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:36:52.544230       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:36:52.544440       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1723
I0902 20:36:52.544541       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase: Failed, bound to: "azuredisk-7463/pvc-6tzst (uid: 4c087965-1007-4357-972d-d588a3c26878)", boundByController: true
I0902 20:36:52.544587       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is bound to claim azuredisk-7463/pvc-6tzst
I0902 20:36:52.544614       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: claim azuredisk-7463/pvc-6tzst not found
I0902 20:36:52.544627       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:52.544646       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:52.544697       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4c087965-1007-4357-972d-d588a3c26878] started
I0902 20:36:52.549725       1 pv_controller.go:1340] isVolumeReleased[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is released
... skipping 5 lines ...
I0902 20:36:57.760581       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4c087965-1007-4357-972d-d588a3c26878
I0902 20:36:57.760620       1 pv_controller.go:1435] volume "pvc-4c087965-1007-4357-972d-d588a3c26878" deleted
I0902 20:36:57.760634       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4c087965-1007-4357-972d-d588a3c26878]: success
I0902 20:36:57.766362       1 pv_protection_controller.go:205] Got event on PV pvc-4c087965-1007-4357-972d-d588a3c26878
I0902 20:36:57.766507       1 pv_protection_controller.go:125] Processing PV pvc-4c087965-1007-4357-972d-d588a3c26878
I0902 20:36:57.766992       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4c087965-1007-4357-972d-d588a3c26878" with version 1778
I0902 20:36:57.767071       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: phase: Failed, bound to: "azuredisk-7463/pvc-6tzst (uid: 4c087965-1007-4357-972d-d588a3c26878)", boundByController: true
I0902 20:36:57.767238       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: volume is bound to claim azuredisk-7463/pvc-6tzst
I0902 20:36:57.767266       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: claim azuredisk-7463/pvc-6tzst not found
I0902 20:36:57.767276       1 pv_controller.go:1108] reclaimVolume[pvc-4c087965-1007-4357-972d-d588a3c26878]: policy is Delete
I0902 20:36:57.767316       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]]
I0902 20:36:57.767325       1 pv_controller.go:1763] operation "delete-pvc-4c087965-1007-4357-972d-d588a3c26878[4c2d3cbb-f5e8-4157-b50f-2d6ca4db7238]" is already running, skipping
I0902 20:36:57.775084       1 pv_controller_base.go:235] volume "pvc-4c087965-1007-4357-972d-d588a3c26878" deleted
... skipping 270 lines ...
I0902 20:37:26.516210       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: claim azuredisk-9241/pvc-7hvdh not found
I0902 20:37:26.516307       1 pv_controller.go:1108] reclaimVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: policy is Delete
I0902 20:37:26.516554       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]]
I0902 20:37:26.516681       1 pv_controller.go:1763] operation "delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]" is already running, skipping
I0902 20:37:26.517874       1 pv_controller.go:1340] isVolumeReleased[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is released
I0902 20:37:26.518061       1 pv_controller.go:1404] doDeleteVolume [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]
I0902 20:37:26.552467       1 pv_controller.go:1259] deletion of volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:37:26.553982       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: set phase Failed
I0902 20:37:26.554002       1 pv_controller.go:858] updating PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: set phase Failed
I0902 20:37:26.559542       1 pv_protection_controller.go:205] Got event on PV pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b
I0902 20:37:26.559776       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" with version 1875
I0902 20:37:26.559998       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: phase: Failed, bound to: "azuredisk-9241/pvc-7hvdh (uid: 9c72e22a-3384-4c31-9c3b-e71571d1dc1b)", boundByController: true
I0902 20:37:26.560573       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is bound to claim azuredisk-9241/pvc-7hvdh
I0902 20:37:26.560755       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: claim azuredisk-9241/pvc-7hvdh not found
I0902 20:37:26.560769       1 pv_controller.go:1108] reclaimVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: policy is Delete
I0902 20:37:26.560008       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" with version 1875
I0902 20:37:26.560976       1 pv_controller.go:879] volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" entered phase "Failed"
I0902 20:37:26.560992       1 pv_controller.go:901] volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
E0902 20:37:26.561056       1 goroutinemap.go:150] Operation for "delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]" failed. No retries permitted until 2022-09-02 20:37:27.061015633 +0000 UTC m=+572.123629524 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:37:26.561177       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]]
I0902 20:37:26.561307       1 pv_controller.go:1765] operation "delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]" postponed due to exponential backoff
I0902 20:37:26.561456       1 event.go:291] "Event occurred" object="pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted"
I0902 20:37:27.519610       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.906µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38454" resp=200
I0902 20:37:27.592154       1 gc_controller.go:161] GC'ing orphaned
I0902 20:37:27.592183       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 16 lines ...
I0902 20:37:36.449213       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0902 20:37:37.433624       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:37:37.435770       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:37:37.520205       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="211.321µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58326" resp=200
I0902 20:37:37.546608       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:37:37.546797       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" with version 1875
I0902 20:37:37.546867       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: phase: Failed, bound to: "azuredisk-9241/pvc-7hvdh (uid: 9c72e22a-3384-4c31-9c3b-e71571d1dc1b)", boundByController: true
I0902 20:37:37.546902       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is bound to claim azuredisk-9241/pvc-7hvdh
I0902 20:37:37.546952       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: claim azuredisk-9241/pvc-7hvdh not found
I0902 20:37:37.546969       1 pv_controller.go:1108] reclaimVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: policy is Delete
I0902 20:37:37.547003       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]]
I0902 20:37:37.547047       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b] started
I0902 20:37:37.552425       1 pv_controller.go:1340] isVolumeReleased[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is released
I0902 20:37:37.552646       1 pv_controller.go:1404] doDeleteVolume [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]
I0902 20:37:37.552855       1 pv_controller.go:1259] deletion of volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) since it's in attaching or detaching state
I0902 20:37:37.553001       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: set phase Failed
I0902 20:37:37.553136       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: phase Failed already set
E0902 20:37:37.553321       1 goroutinemap.go:150] Operation for "delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]" failed. No retries permitted until 2022-09-02 20:37:38.553270407 +0000 UTC m=+583.615884398 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) since it's in attaching or detaching state
I0902 20:37:37.663938       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000001 ReadyCondition updated. Updating timestamp.
I0902 20:37:38.070331       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:37:44.727997       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0902 20:37:45.024617       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 0 items received
I0902 20:37:47.519740       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.206µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56596" resp=200
I0902 20:37:47.593051       1 gc_controller.go:161] GC'ing orphaned
... skipping 2 lines ...
I0902 20:37:49.099475       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b) succeeded
I0902 20:37:49.099488       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b was detached from node:capz-clawep-mp-0000001
I0902 20:37:49.099513       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b") on node "capz-clawep-mp-0000001" 
I0902 20:37:52.436685       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:37:52.546720       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:37:52.546787       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" with version 1875
I0902 20:37:52.546953       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: phase: Failed, bound to: "azuredisk-9241/pvc-7hvdh (uid: 9c72e22a-3384-4c31-9c3b-e71571d1dc1b)", boundByController: true
I0902 20:37:52.546996       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is bound to claim azuredisk-9241/pvc-7hvdh
I0902 20:37:52.547042       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: claim azuredisk-9241/pvc-7hvdh not found
I0902 20:37:52.547054       1 pv_controller.go:1108] reclaimVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: policy is Delete
I0902 20:37:52.547074       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]]
I0902 20:37:52.547123       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b] started
I0902 20:37:52.566597       1 pv_controller.go:1340] isVolumeReleased[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is released
I0902 20:37:52.566622       1 pv_controller.go:1404] doDeleteVolume [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]
I0902 20:37:57.520200       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="86.208µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41740" resp=200
I0902 20:37:57.755557       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b
I0902 20:37:57.755591       1 pv_controller.go:1435] volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" deleted
I0902 20:37:57.755604       1 pv_controller.go:1283] deleteVolumeOperation [pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: success
I0902 20:37:57.763563       1 pv_protection_controller.go:205] Got event on PV pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b
I0902 20:37:57.763791       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b" with version 1924
I0902 20:37:57.764307       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: phase: Failed, bound to: "azuredisk-9241/pvc-7hvdh (uid: 9c72e22a-3384-4c31-9c3b-e71571d1dc1b)", boundByController: true
I0902 20:37:57.764497       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: volume is bound to claim azuredisk-9241/pvc-7hvdh
I0902 20:37:57.764610       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: claim azuredisk-9241/pvc-7hvdh not found
I0902 20:37:57.764696       1 pv_controller.go:1108] reclaimVolume[pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b]: policy is Delete
I0902 20:37:57.764864       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]]
I0902 20:37:57.765025       1 pv_controller.go:1763] operation "delete-pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b[80fb2065-a93b-4e7f-b1fb-d5a65b84925a]" is already running, skipping
I0902 20:37:57.765321       1 pv_protection_controller.go:125] Processing PV pvc-9c72e22a-3384-4c31-9c3b-e71571d1dc1b
... skipping 110 lines ...
I0902 20:38:06.979493       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-d2stg.171125914f7a3d1a, uid 230b8f5f-ee13-46a3-90b2-dc76a6bb574a, event type delete
I0902 20:38:06.983250       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-d2stg.1711259152e347c4, uid 42b3d7d4-8875-4bcc-b149-0a6f0f5d340e, event type delete
I0902 20:38:06.989776       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name azuredisk-volume-tester-d2stg.171125915a33b6e2, uid ae359758-d11f-4f9a-a91e-100c1864de10, event type delete
I0902 20:38:06.994350       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-7hvdh.1711258c423aadb3, uid c3990a70-e87f-4904-b38d-0fc1480f99c9, event type delete
I0902 20:38:07.005670       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9241, name pvc-7hvdh.1711258cce13d5e5, uid 9016652e-0bcd-4318-9c6d-c847e84f4093, event type delete
I0902 20:38:07.019312       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9241, name default-token-grsmp, uid d03710ac-47ae-4826-91aa-bd703a113043, event type delete
E0902 20:38:07.038302       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9241/default: secrets "default-token-h7ctx" is forbidden: unable to create new content in namespace azuredisk-9241 because it is being terminated
I0902 20:38:07.071522       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9241, name kube-root-ca.crt, uid c107fe71-88bc-464b-8944-f3c4e70fb5d3, event type delete
I0902 20:38:07.076718       1 publisher.go:186] Finished syncing namespace "azuredisk-9241" (4.680603ms)
I0902 20:38:07.083832       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9241/default), service account deleted, removing tokens
I0902 20:38:07.084094       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9241, name default, uid fe53f15d-3848-4a53-a1dc-83a345ac2979, event type delete
I0902 20:38:07.084416       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9241" (2.8µs)
I0902 20:38:07.152994       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9241, estimate: 0, errors: <nil>
... skipping 711 lines ...
I0902 20:39:33.446907       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:39:33.446968       1 pv_controller.go:1763] operation "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" is already running, skipping
I0902 20:39:33.446221       1 pv_protection_controller.go:205] Got event on PV pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
I0902 20:39:33.448865       1 pv_controller.go:1340] isVolumeReleased[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is released
I0902 20:39:33.448892       1 pv_controller.go:1404] doDeleteVolume [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]
I0902 20:39:33.453187       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 on node "capz-clawep-mp-0000000"
I0902 20:39:33.473223       1 pv_controller.go:1259] deletion of volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:39:33.473249       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: set phase Failed
I0902 20:39:33.473261       1 pv_controller.go:858] updating PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: set phase Failed
I0902 20:39:33.477275       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2166
I0902 20:39:33.477315       1 pv_controller.go:879] volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" entered phase "Failed"
I0902 20:39:33.477329       1 pv_controller.go:901] volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
E0902 20:39:33.477607       1 goroutinemap.go:150] Operation for "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" failed. No retries permitted until 2022-09-02 20:39:33.977563392 +0000 UTC m=+699.040177283 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:39:33.477849       1 event.go:291] "Event occurred" object="pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
I0902 20:39:33.478248       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2166
I0902 20:39:33.478471       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase: Failed, bound to: "azuredisk-9336/pvc-xh2jg (uid: 457b987d-0b81-4514-8b17-126b90cc8dd3)", boundByController: true
I0902 20:39:33.478728       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is bound to claim azuredisk-9336/pvc-xh2jg
I0902 20:39:33.478929       1 pv_protection_controller.go:205] Got event on PV pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
I0902 20:39:33.478956       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: claim azuredisk-9336/pvc-xh2jg not found
I0902 20:39:33.479314       1 pv_controller.go:1108] reclaimVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: policy is Delete
I0902 20:39:33.479483       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:39:33.481478       1 pv_controller.go:1765] operation "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" postponed due to exponential backoff
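The goroutinemap error above shows how failed delete operations are throttled: each failure records a "no retries permitted until ..." deadline, and the delay doubles on every subsequent failure (500ms here, then 1s and 2s later in this log). A minimal sketch of that doubling-delay retry pattern — not the actual controller code — is:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the wait between attempts
// (500ms, 1s, 2s, ...) the way the controller throttles its
// delete-volume operations. Sketch of the pattern only.
func retryWithBackoff(op func() error, attempts int) error {
	delay := 500 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New("disk still attached") // simulated failure
		}
		return nil
	}, 5)
}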
... skipping 22 lines ...
I0902 20:39:37.554751       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:39:37.554768       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj found: phase: Bound, bound to: "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc", bindCompleted: true, boundByController: true
I0902 20:39:37.554784       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: all is bound
I0902 20:39:37.554791       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Bound
I0902 20:39:37.554801       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase Bound already set
I0902 20:39:37.554817       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2166
I0902 20:39:37.554837       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase: Failed, bound to: "azuredisk-9336/pvc-xh2jg (uid: 457b987d-0b81-4514-8b17-126b90cc8dd3)", boundByController: true
I0902 20:39:37.554856       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is bound to claim azuredisk-9336/pvc-xh2jg
I0902 20:39:37.554877       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: claim azuredisk-9336/pvc-xh2jg not found
I0902 20:39:37.554885       1 pv_controller.go:1108] reclaimVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: policy is Delete
I0902 20:39:37.554899       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:39:37.554923       1 pv_controller.go:1231] deleteVolumeOperation [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3] started
I0902 20:39:37.555120       1 pv_controller.go:861] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase Bound already set
... skipping 19 lines ...
I0902 20:39:37.555419       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-2x4qj] status: phase Bound already set
I0902 20:39:37.555431       1 pv_controller.go:1038] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" bound to claim "azuredisk-9336/pvc-2x4qj"
I0902 20:39:37.555449       1 pv_controller.go:1039] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:39:37.555466       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-2x4qj" status after binding: phase: Bound, bound to: "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc", bindCompleted: true, boundByController: true
I0902 20:39:37.561822       1 pv_controller.go:1340] isVolumeReleased[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is released
I0902 20:39:37.561999       1 pv_controller.go:1404] doDeleteVolume [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]
I0902 20:39:37.607992       1 pv_controller.go:1259] deletion of volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:39:37.608017       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: set phase Failed
I0902 20:39:37.608029       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase Failed already set
E0902 20:39:37.608112       1 goroutinemap.go:150] Operation for "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" failed. No retries permitted until 2022-09-02 20:39:38.608070021 +0000 UTC m=+703.670684012 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:39:38.132811       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:39:38.561847       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:39:38.561902       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e to the node "capz-clawep-mp-0000000" mounted true
I0902 20:39:38.561914       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 to the node "capz-clawep-mp-0000000" mounted false
I0902 20:39:38.599109       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e\"}]}}" for node "capz-clawep-mp-0000000" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e 0}]
I0902 20:39:38.599494       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3") on node "capz-clawep-mp-0000000" 
... skipping 30 lines ...
I0902 20:39:52.555329       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:39:52.555344       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj found: phase: Bound, bound to: "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc", bindCompleted: true, boundByController: true
I0902 20:39:52.555363       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: all is bound
I0902 20:39:52.555370       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Bound
I0902 20:39:52.555393       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase Bound already set
I0902 20:39:52.555406       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2166
I0902 20:39:52.555423       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase: Failed, bound to: "azuredisk-9336/pvc-xh2jg (uid: 457b987d-0b81-4514-8b17-126b90cc8dd3)", boundByController: true
I0902 20:39:52.555445       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is bound to claim azuredisk-9336/pvc-xh2jg
I0902 20:39:52.555466       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: claim azuredisk-9336/pvc-xh2jg not found
I0902 20:39:52.555474       1 pv_controller.go:1108] reclaimVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: policy is Delete
I0902 20:39:52.555489       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:39:52.555515       1 pv_controller.go:1231] deleteVolumeOperation [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3] started
I0902 20:39:52.556200       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-9336/pvc-tdqf2]: volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" found: phase: Bound, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
... skipping 25 lines ...
I0902 20:39:52.556545       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-2x4qj] status: phase Bound already set
I0902 20:39:52.556555       1 pv_controller.go:1038] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" bound to claim "azuredisk-9336/pvc-2x4qj"
I0902 20:39:52.556569       1 pv_controller.go:1039] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:39:52.556584       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-2x4qj" status after binding: phase: Bound, bound to: "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc", bindCompleted: true, boundByController: true
I0902 20:39:52.565329       1 pv_controller.go:1340] isVolumeReleased[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is released
I0902 20:39:52.565347       1 pv_controller.go:1404] doDeleteVolume [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]
I0902 20:39:52.565504       1 pv_controller.go:1259] deletion of volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) since it's in attaching or detaching state
I0902 20:39:52.565523       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: set phase Failed
I0902 20:39:52.565533       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase Failed already set
E0902 20:39:52.565562       1 goroutinemap.go:150] Operation for "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" failed. No retries permitted until 2022-09-02 20:39:54.565542838 +0000 UTC m=+719.628156729 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) since it's in attaching or detaching state
I0902 20:39:53.902880       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) returned with <nil>
I0902 20:39:53.902934       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3) succeeded
I0902 20:39:53.902945       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3 was detached from node:capz-clawep-mp-0000000
I0902 20:39:53.902969       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3") on node "capz-clawep-mp-0000000" 
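Taken together, the lines above show the ordering the volume plugin enforces: the managed disk cannot be deleted while it is attached or mid-detach, so deleteVolumeOperation keeps failing until the attach/detach controller finishes DetachVolume, after which the next retry succeeds (the "deleted a managed disk" line further down). A minimal sketch of that wait-for-detach-then-delete ordering, using hypothetical isAttached/detachDisk/deleteDisk helpers rather than the real cloud-provider calls:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the cloud-provider calls seen in this log
// (azure_controller_vmss.go detach, azure_managedDiskController.go delete).
var attached = true

func isAttached(diskURI string) bool  { return attached }
func detachDisk(diskURI string) error { attached = false; return nil }
func deleteDisk(diskURI string) error {
	if isAttached(diskURI) {
		return errors.New("disk is attached or detaching; cannot delete")
	}
	return nil
}

func main() {
	disk := "/subscriptions/.../disks/example-disk" // placeholder URI
	if err := deleteDisk(disk); err != nil {
		fmt.Println("delete blocked:", err) // mirrors the failed deleteVolumeOperation retries
		_ = detachDisk(disk)                // attach/detach controller detaches first
	}
	fmt.Println("delete after detach:", deleteDisk(disk)) // now succeeds
}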
I0902 20:39:57.520478       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="123.613µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37240" resp=200
I0902 20:40:03.966839       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-clawep-mp-0000000, refreshing the cache
... skipping 25 lines ...
I0902 20:40:07.559577       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:40:07.559716       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj found: phase: Bound, bound to: "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc", bindCompleted: true, boundByController: true
I0902 20:40:07.559744       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: all is bound
I0902 20:40:07.559755       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Bound
I0902 20:40:07.559767       1 pv_controller.go:861] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase Bound already set
I0902 20:40:07.559789       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2166
I0902 20:40:07.559849       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase: Failed, bound to: "azuredisk-9336/pvc-xh2jg (uid: 457b987d-0b81-4514-8b17-126b90cc8dd3)", boundByController: true
I0902 20:40:07.559902       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is bound to claim azuredisk-9336/pvc-xh2jg
I0902 20:40:07.559964       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: claim azuredisk-9336/pvc-xh2jg not found
I0902 20:40:07.560020       1 pv_controller.go:1108] reclaimVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: policy is Delete
I0902 20:40:07.558977       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9336/pvc-tdqf2] status: set phase Bound
I0902 20:40:07.560235       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:40:07.560361       1 pv_controller.go:1231] deleteVolumeOperation [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3] started
... skipping 25 lines ...
I0902 20:40:12.733175       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
I0902 20:40:12.733207       1 pv_controller.go:1435] volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" deleted
I0902 20:40:12.733220       1 pv_controller.go:1283] deleteVolumeOperation [pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: success
I0902 20:40:12.750858       1 pv_protection_controller.go:205] Got event on PV pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
I0902 20:40:12.750894       1 pv_protection_controller.go:125] Processing PV pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
I0902 20:40:12.751293       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-457b987d-0b81-4514-8b17-126b90cc8dd3" with version 2226
I0902 20:40:12.751334       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: phase: Failed, bound to: "azuredisk-9336/pvc-xh2jg (uid: 457b987d-0b81-4514-8b17-126b90cc8dd3)", boundByController: true
I0902 20:40:12.751908       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: volume is bound to claim azuredisk-9336/pvc-xh2jg
I0902 20:40:12.751949       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: claim azuredisk-9336/pvc-xh2jg not found
I0902 20:40:12.752060       1 pv_controller.go:1108] reclaimVolume[pvc-457b987d-0b81-4514-8b17-126b90cc8dd3]: policy is Delete
I0902 20:40:12.752083       1 pv_controller.go:1752] scheduleOperation[delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]]
I0902 20:40:12.752091       1 pv_controller.go:1763] operation "delete-pvc-457b987d-0b81-4514-8b17-126b90cc8dd3[f6c87191-10c8-4bdc-9a3e-32f467a408b6]" is already running, skipping
I0902 20:40:12.760781       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-457b987d-0b81-4514-8b17-126b90cc8dd3
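With doDeleteVolume reporting success, the pv-protection controller strips the kubernetes.io/pv-protection finalizer so the PersistentVolume object itself can finally be removed. A minimal client-go sketch for inspecting a PV's finalizers — the kubeconfig path and PV name are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(), "pvc-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// While the PV exists this typically prints [kubernetes.io/pv-protection];
	// the pv-protection controller removes it once deletion may proceed.
	fmt.Println(pv.Finalizers)
}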
... skipping 194 lines ...
I0902 20:40:48.601719       1 pv_controller.go:1108] reclaimVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: policy is Delete
I0902 20:40:48.601490       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc] started
I0902 20:40:48.601817       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]]
I0902 20:40:48.602269       1 pv_controller.go:1763] operation "delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]" is already running, skipping
I0902 20:40:48.603867       1 pv_controller.go:1340] isVolumeReleased[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is released
I0902 20:40:48.604046       1 pv_controller.go:1404] doDeleteVolume [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]
I0902 20:40:48.630888       1 pv_controller.go:1259] deletion of volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:40:48.630915       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Failed
I0902 20:40:48.630924       1 pv_controller.go:858] updating PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Failed
I0902 20:40:48.633651       1 pv_protection_controller.go:205] Got event on PV pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc
I0902 20:40:48.634061       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" with version 2292
I0902 20:40:48.634260       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase: Failed, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:40:48.634387       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:40:48.634526       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj not found
I0902 20:40:48.634642       1 pv_controller.go:1108] reclaimVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: policy is Delete
I0902 20:40:48.634770       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]]
I0902 20:40:48.634879       1 pv_controller.go:1763] operation "delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]" is already running, skipping
I0902 20:40:48.635182       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" with version 2292
I0902 20:40:48.635336       1 pv_controller.go:879] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" entered phase "Failed"
I0902 20:40:48.635465       1 pv_controller.go:901] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
E0902 20:40:48.635611       1 goroutinemap.go:150] Operation for "delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]" failed. No retries permitted until 2022-09-02 20:40:49.135588526 +0000 UTC m=+774.198202417 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:40:48.636169       1 event.go:291] "Event occurred" object="pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted"
I0902 20:40:48.655205       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc on node "capz-clawep-mp-0000001"
I0902 20:40:51.418025       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 590 items received
I0902 20:40:52.444719       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:40:52.560069       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:40:52.560246       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 1950
I0902 20:40:52.560360       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase: Bound, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:40:52.560490       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:40:52.560618       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 found: phase: Bound, bound to: "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e", bindCompleted: true, boundByController: true
I0902 20:40:52.560696       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: all is bound
I0902 20:40:52.560727       1 pv_controller.go:858] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: set phase Bound
I0902 20:40:52.560756       1 pv_controller.go:861] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase Bound already set
I0902 20:40:52.560830       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" with version 2292
I0902 20:40:52.560890       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase: Failed, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:40:52.560989       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:40:52.561013       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj not found
I0902 20:40:52.561021       1 pv_controller.go:1108] reclaimVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: policy is Delete
I0902 20:40:52.561038       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]]
I0902 20:40:52.561064       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc] started
I0902 20:40:52.560534       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-tdqf2" with version 1953
... skipping 11 lines ...
I0902 20:40:52.561631       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9336/pvc-tdqf2] status: phase Bound already set
I0902 20:40:52.561672       1 pv_controller.go:1038] volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" bound to claim "azuredisk-9336/pvc-tdqf2"
I0902 20:40:52.561925       1 pv_controller.go:1039] volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" status after binding: phase: Bound, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:40:52.562134       1 pv_controller.go:1040] claim "azuredisk-9336/pvc-tdqf2" status after binding: phase: Bound, bound to: "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e", bindCompleted: true, boundByController: true
I0902 20:40:52.565724       1 pv_controller.go:1340] isVolumeReleased[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is released
I0902 20:40:52.565744       1 pv_controller.go:1404] doDeleteVolume [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]
I0902 20:40:52.590109       1 pv_controller.go:1259] deletion of volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:40:52.590137       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: set phase Failed
I0902 20:40:52.590147       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase Failed already set
E0902 20:40:52.590201       1 goroutinemap.go:150] Operation for "delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]" failed. No retries permitted until 2022-09-02 20:40:53.590182201 +0000 UTC m=+778.652796092 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:40:53.835590       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
I0902 20:40:53.836500       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc to the node "capz-clawep-mp-0000001" mounted false
I0902 20:40:53.896622       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000001"
I0902 20:40:53.896658       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc to the node "capz-clawep-mp-0000001" mounted false
I0902 20:40:53.897927       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-clawep-mp-0000001" succeeded. VolumesAttached: []
I0902 20:40:53.898268       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc") on node "capz-clawep-mp-0000001" 
... skipping 18 lines ...
I0902 20:41:07.560840       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:41:07.560877       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 found: phase: Bound, bound to: "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e", bindCompleted: true, boundByController: true
I0902 20:41:07.560896       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: all is bound
I0902 20:41:07.560910       1 pv_controller.go:858] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: set phase Bound
I0902 20:41:07.560921       1 pv_controller.go:861] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase Bound already set
I0902 20:41:07.560961       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" with version 2292
I0902 20:41:07.560986       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase: Failed, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:41:07.561026       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:41:07.561052       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj not found
I0902 20:41:07.561064       1 pv_controller.go:1108] reclaimVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: policy is Delete
I0902 20:41:07.561101       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]]
I0902 20:41:07.561243       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc] started
I0902 20:41:07.561449       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9336/pvc-tdqf2" with version 1953
... skipping 21 lines ...
I0902 20:41:12.732206       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc
I0902 20:41:12.732439       1 pv_controller.go:1435] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" deleted
I0902 20:41:12.732459       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: success
I0902 20:41:12.741056       1 pv_protection_controller.go:205] Got event on PV pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc
I0902 20:41:12.741090       1 pv_protection_controller.go:125] Processing PV pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc
I0902 20:41:12.741414       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" with version 2330
I0902 20:41:12.741455       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: phase: Failed, bound to: "azuredisk-9336/pvc-2x4qj (uid: c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc)", boundByController: true
I0902 20:41:12.741484       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: volume is bound to claim azuredisk-9336/pvc-2x4qj
I0902 20:41:12.741513       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: claim azuredisk-9336/pvc-2x4qj not found
I0902 20:41:12.741527       1 pv_controller.go:1108] reclaimVolume[pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc]: policy is Delete
I0902 20:41:12.741545       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]]
I0902 20:41:12.741558       1 pv_controller.go:1763] operation "delete-pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc[6fc3a8ce-11cf-43d2-8efe-9aa674691f3f]" is already running, skipping
I0902 20:41:12.745703       1 pv_controller_base.go:235] volume "pvc-c0d87a5e-bc0e-4afc-a6ed-8b8b57ce81dc" deleted
... skipping 151 lines ...
I0902 20:41:48.708276       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]]
I0902 20:41:48.708283       1 pv_controller.go:1763] operation "delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]" is already running, skipping
I0902 20:41:48.708314       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e] started
I0902 20:41:48.708069       1 pv_protection_controller.go:205] Got event on PV pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e
I0902 20:41:48.709830       1 pv_controller.go:1340] isVolumeReleased[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is released
I0902 20:41:48.709848       1 pv_controller.go:1404] doDeleteVolume [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]
I0902 20:41:48.750599       1 pv_controller.go:1259] deletion of volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:41:48.750622       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: set phase Failed
I0902 20:41:48.750631       1 pv_controller.go:858] updating PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: set phase Failed
I0902 20:41:48.753365       1 pv_protection_controller.go:205] Got event on PV pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e
I0902 20:41:48.753562       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 2394
I0902 20:41:48.754012       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase: Failed, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:41:48.754166       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:41:48.754298       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 not found
I0902 20:41:48.754412       1 pv_controller.go:1108] reclaimVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: policy is Delete
I0902 20:41:48.754516       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]]
I0902 20:41:48.754635       1 pv_controller.go:1763] operation "delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]" is already running, skipping
I0902 20:41:48.754978       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 2394
I0902 20:41:48.755006       1 pv_controller.go:879] volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" entered phase "Failed"
I0902 20:41:48.755016       1 pv_controller.go:901] volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
E0902 20:41:48.755055       1 goroutinemap.go:150] Operation for "delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]" failed. No retries permitted until 2022-09-02 20:41:49.255037085 +0000 UTC m=+834.317651076 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:41:48.755352       1 event.go:291] "Event occurred" object="pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
I0902 20:41:48.775178       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e on node "capz-clawep-mp-0000000"
I0902 20:41:48.784693       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-clawep-mp-0000000" succeeded. VolumesAttached: []
I0902 20:41:48.784956       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e") on node "capz-clawep-mp-0000000" 
I0902 20:41:48.785115       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:41:48.785222       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e to the node "capz-clawep-mp-0000000" mounted false
... skipping 2 lines ...
I0902 20:41:48.788106       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e"
I0902 20:41:48.788300       1 azure_controller_vmss.go:175] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e)
I0902 20:41:52.199506       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0902 20:41:52.447475       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:41:52.562817       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:41:52.562965       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 2394
I0902 20:41:52.563087       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase: Failed, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:41:52.563209       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:41:52.563264       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 not found
I0902 20:41:52.563275       1 pv_controller.go:1108] reclaimVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: policy is Delete
I0902 20:41:52.563299       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]]
I0902 20:41:52.563382       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e] started
I0902 20:41:52.566410       1 pv_controller.go:1340] isVolumeReleased[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is released
I0902 20:41:52.566429       1 pv_controller.go:1404] doDeleteVolume [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]
I0902 20:41:52.566602       1 pv_controller.go:1259] deletion of volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) since it's in attaching or detaching state
I0902 20:41:52.566627       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: set phase Failed
I0902 20:41:52.566637       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase Failed already set
E0902 20:41:52.566736       1 goroutinemap.go:150] Operation for "delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]" failed. No retries permitted until 2022-09-02 20:41:53.566713522 +0000 UTC m=+838.629327413 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e) since it's in attaching or detaching state
I0902 20:41:52.705280       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 20:41:55.780197       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 0 items received
I0902 20:41:56.427163       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0902 20:41:57.520568       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="76.708µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56514" resp=200
I0902 20:42:02.024363       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 37 items received
I0902 20:42:03.482308       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 59 items received
... skipping 4 lines ...
I0902 20:42:04.419590       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 83 items received
I0902 20:42:07.440058       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:42:07.448296       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:42:07.520121       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.806µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54032" resp=200
I0902 20:42:07.564009       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:42:07.564491       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 2394
I0902 20:42:07.564581       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase: Failed, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:42:07.564649       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:42:07.564742       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 not found
I0902 20:42:07.564759       1 pv_controller.go:1108] reclaimVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: policy is Delete
I0902 20:42:07.564777       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]]
I0902 20:42:07.564834       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e] started
I0902 20:42:07.569296       1 pv_controller.go:1340] isVolumeReleased[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is released
... skipping 5 lines ...
I0902 20:42:12.737802       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e
I0902 20:42:12.737874       1 pv_controller.go:1435] volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" deleted
I0902 20:42:12.737904       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: success
I0902 20:42:12.748920       1 pv_protection_controller.go:205] Got event on PV pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e
I0902 20:42:12.749153       1 pv_protection_controller.go:125] Processing PV pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e
I0902 20:42:12.749655       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" with version 2430
I0902 20:42:12.750119       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: phase: Failed, bound to: "azuredisk-9336/pvc-tdqf2 (uid: 4421f27a-2ba2-409c-93a4-5357a9f9056e)", boundByController: true
I0902 20:42:12.750157       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: volume is bound to claim azuredisk-9336/pvc-tdqf2
I0902 20:42:12.750179       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: claim azuredisk-9336/pvc-tdqf2 not found
I0902 20:42:12.750223       1 pv_controller.go:1108] reclaimVolume[pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e]: policy is Delete
I0902 20:42:12.750308       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e[07b5fc0b-2701-4553-972e-683bb1b74bc6]]
I0902 20:42:12.750418       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e] started
I0902 20:42:12.755307       1 pv_controller.go:1243] Volume "pvc-4421f27a-2ba2-409c-93a4-5357a9f9056e" is already being deleted
... skipping 33 lines ...
I0902 20:42:13.940504       1 pvc_protection_controller.go:159] "Finished processing PVC" PVC="azuredisk-2205/pvc-tpzm8" duration="5.8µs"
I0902 20:42:13.940988       1 controller_utils.go:581] Controller azuredisk-volume-tester-d9mcn-68646f56fd created pod azuredisk-volume-tester-d9mcn-68646f56fd-cxs5z
I0902 20:42:13.941076       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0902 20:42:13.941374       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-d9mcn-68646f56fd-cxs5z"
I0902 20:42:13.950989       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd"
I0902 20:42:13.952177       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" duration="33.420387ms"
I0902 20:42:13.952414       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-d9mcn\": the object has been modified; please apply your changes to the latest version and try again"
I0902 20:42:13.952658       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" startTime="2022-09-02 20:42:13.952631676 +0000 UTC m=+859.015245567"
I0902 20:42:13.953256       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-d9mcn" timed out (false) [last progress check: 2022-09-02 20:42:13 +0000 UTC - now: 2022-09-02 20:42:13.953249339 +0000 UTC m=+859.015863230]
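The "object has been modified" error a few lines above is a routine optimistic-concurrency conflict: the deployment controller simply requeues and re-syncs, which is the "Started syncing deployment" line that follows it. A client hitting the same conflict can re-read and re-apply with client-go's retry helper; a minimal sketch, where the kubeconfig path, namespace, and deployment name are placeholders rather than values from this run:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-read and re-apply the change whenever the apiserver reports
	// "the object has been modified" (a resourceVersion conflict).
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := cs.AppsV1().Deployments("default").Get(context.TODO(), "example", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Annotations == nil {
			d.Annotations = map[string]string{}
		}
		d.Annotations["example/touched"] = "true"
		_, err = cs.AppsV1().Deployments("default").Update(context.TODO(), d, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}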
I0902 20:42:13.954179       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd" (21.41647ms)
I0902 20:42:13.954221       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd", timestamp:time.Time{wall:0xc0bcb88977995a8a, ext:858995410957, loc:(*time.Location)(0x751a1a0)}}
I0902 20:42:13.954286       1 replica_set_utils.go:59] Updating status for : azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0902 20:42:13.957313       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2205/pvc-tpzm8" with version 2452
... skipping 235 lines ...
I0902 20:42:32.871934       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn"
I0902 20:42:32.872044       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" startTime="2022-09-02 20:42:32.872021932 +0000 UTC m=+877.934635823"
I0902 20:42:32.872376       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd"
I0902 20:42:32.872699       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-d9mcn" for a progress check after 597s
I0902 20:42:32.872814       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" duration="757.977µs"
I0902 20:42:32.872921       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-2205/azuredisk-volume-tester-d9mcn" startTime="2022-09-02 20:42:32.872899621 +0000 UTC m=+877.935513612"
W0902 20:42:32.875643       1 reconciler.go:385] Multi-Attach error for volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45") from node "capz-clawep-mp-0000000" Volume is already used by pods azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd-cxs5z on node capz-clawep-mp-0000001
I0902 20:42:32.876051       1 event.go:291] "Event occurred" object="azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45\" Volume is already used by pod(s) azuredisk-volume-tester-d9mcn-68646f56fd-cxs5z"
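The Multi-Attach warning above is the attach/detach controller refusing to attach a ReadWriteOnce Azure Disk to capz-clawep-mp-0000000 while it is still recorded as in use on capz-clawep-mp-0000001. The controller's view of attachments is published in each node's status; a minimal client-go sketch for listing it, with the kubeconfig path as a placeholder:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// node.Status.VolumesAttached is what the attach/detach controller consults:
	// a ReadWriteOnce disk listed under one node cannot be attached to another,
	// which is exactly the Multi-Attach warning logged above.
	for _, n := range nodes.Items {
		for _, v := range n.Status.VolumesAttached {
			fmt.Printf("%s: %s (device %s)\n", n.Name, v.Name, v.DevicePath)
		}
	}
}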
I0902 20:42:32.879780       1 replica_set.go:443] Pod azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr updated, objectMeta {Name:azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr GenerateName:azuredisk-volume-tester-d9mcn-68646f56fd- Namespace:azuredisk-2205 SelfLink: UID:285348a2-2a7f-4c95-88c3-b301098fdb9e ResourceVersion:2533 Generation:0 CreationTimestamp:2022-09-02 20:42:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-7660323324116104765 pod-template-hash:68646f56fd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-d9mcn-68646f56fd UID:eb053990-47b9-41a6-b75a-d9e3198e6187 Controller:0xc001f3ce4e BlockOwnerDeletion:0xc001f3ce4f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:42:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb053990-47b9-41a6-b75a-d9e3198e6187\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr GenerateName:azuredisk-volume-tester-d9mcn-68646f56fd- Namespace:azuredisk-2205 SelfLink: UID:285348a2-2a7f-4c95-88c3-b301098fdb9e ResourceVersion:2539 Generation:0 CreationTimestamp:2022-09-02 20:42:32 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-7660323324116104765 pod-template-hash:68646f56fd] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-d9mcn-68646f56fd UID:eb053990-47b9-41a6-b75a-d9e3198e6187 Controller:0xc0024c12e7 BlockOwnerDeletion:0xc0024c12e8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-02 20:42:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"eb053990-47b9-41a6-b75a-d9e3198e6187\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-02 20:42:32 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0902 20:42:32.880409       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr"
I0902 20:42:32.880854       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr, PodDisruptionBudget controller will avoid syncing.
I0902 20:42:32.881014       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-d9mcn-68646f56fd-n5dvr"
I0902 20:42:32.882794       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd" (24.397484ms)
I0902 20:42:32.882987       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-2205/azuredisk-volume-tester-d9mcn-68646f56fd", timestamp:time.Time{wall:0xc0bcb88e31e8f592, ext:877899964793, loc:(*time.Location)(0x751a1a0)}}
... skipping 431 lines ...
I0902 20:44:16.014002       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]]
I0902 20:44:16.014067       1 pv_controller.go:1763] operation "delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]" is already running, skipping
I0902 20:44:16.014180       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45] started
I0902 20:44:16.015954       1 pv_controller.go:1340] isVolumeReleased[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is released
I0902 20:44:16.015973       1 pv_controller.go:1404] doDeleteVolume [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]
I0902 20:44:16.067389       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 on node "capz-clawep-mp-0000000"
I0902 20:44:16.164883       1 pv_controller.go:1259] deletion of volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:44:16.164912       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: set phase Failed
I0902 20:44:16.164924       1 pv_controller.go:858] updating PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: set phase Failed
I0902 20:44:16.169198       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" with version 2729
I0902 20:44:16.169481       1 pv_controller.go:879] volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" entered phase "Failed"
I0902 20:44:16.169707       1 pv_controller.go:901] volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
E0902 20:44:16.169901       1 goroutinemap.go:150] Operation for "delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]" failed. No retries permitted until 2022-09-02 20:44:16.669876686 +0000 UTC m=+981.732490677 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:44:16.170093       1 pv_protection_controller.go:205] Got event on PV pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45
I0902 20:44:16.170122       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" with version 2729
I0902 20:44:16.170766       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: phase: Failed, bound to: "azuredisk-2205/pvc-tpzm8 (uid: cf0575c5-c12b-4a4e-92fd-d0fd03c87a45)", boundByController: true
I0902 20:44:16.170810       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is bound to claim azuredisk-2205/pvc-tpzm8
I0902 20:44:16.170837       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: claim azuredisk-2205/pvc-tpzm8 not found
I0902 20:44:16.170853       1 pv_controller.go:1108] reclaimVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: policy is Delete
I0902 20:44:16.170872       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]]
I0902 20:44:16.170891       1 pv_controller.go:1765] operation "delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]" postponed due to exponential backoff
I0902 20:44:16.170212       1 event.go:291] "Event occurred" object="pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
... skipping 9 lines ...
I0902 20:44:18.914407       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45 from node "capz-clawep-mp-0000000"
I0902 20:44:18.914569       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45"
I0902 20:44:18.914724       1 azure_controller_vmss.go:175] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45)
I0902 20:44:22.455204       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:44:22.571907       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:44:22.571957       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" with version 2729
I0902 20:44:22.572135       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: phase: Failed, bound to: "azuredisk-2205/pvc-tpzm8 (uid: cf0575c5-c12b-4a4e-92fd-d0fd03c87a45)", boundByController: true
I0902 20:44:22.572214       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is bound to claim azuredisk-2205/pvc-tpzm8
I0902 20:44:22.572271       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: claim azuredisk-2205/pvc-tpzm8 not found
I0902 20:44:22.572286       1 pv_controller.go:1108] reclaimVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: policy is Delete
I0902 20:44:22.572357       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]]
I0902 20:44:22.572434       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45] started
I0902 20:44:22.579343       1 pv_controller.go:1340] isVolumeReleased[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is released
I0902 20:44:22.579364       1 pv_controller.go:1404] doDeleteVolume [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]
I0902 20:44:22.579467       1 pv_controller.go:1259] deletion of volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) since it's in attaching or detaching state
I0902 20:44:22.579549       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: set phase Failed
I0902 20:44:22.579630       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: phase Failed already set
E0902 20:44:22.579728       1 goroutinemap.go:150] Operation for "delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]" failed. No retries permitted until 2022-09-02 20:44:23.579705076 +0000 UTC m=+988.642319067 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45) since it's in attaching or detaching state
I0902 20:44:22.727503       1 node_lifecycle_controller.go:1047] Node capz-clawep-mp-0000000 ReadyCondition updated. Updating timestamp.
I0902 20:44:22.727631       1 node_lifecycle_controller.go:1047] Node capz-clawep-control-plane-65svl ReadyCondition updated. Updating timestamp.
I0902 20:44:23.851324       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-fhmwd2" (15.101µs)
I0902 20:44:27.520457       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="91.309µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44760" resp=200
I0902 20:44:27.607305       1 gc_controller.go:161] GC'ing orphaned
I0902 20:44:27.607335       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
... skipping 5 lines ...
I0902 20:44:34.040610       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-fhmwd2" (13.101µs)
I0902 20:44:37.444838       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:44:37.456166       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:44:37.520256       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="202.821µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57798" resp=200
I0902 20:44:37.572307       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:44:37.572590       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" with version 2729
I0902 20:44:37.572789       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: phase: Failed, bound to: "azuredisk-2205/pvc-tpzm8 (uid: cf0575c5-c12b-4a4e-92fd-d0fd03c87a45)", boundByController: true
I0902 20:44:37.572834       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is bound to claim azuredisk-2205/pvc-tpzm8
I0902 20:44:37.572884       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: claim azuredisk-2205/pvc-tpzm8 not found
I0902 20:44:37.573002       1 pv_controller.go:1108] reclaimVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: policy is Delete
I0902 20:44:37.573028       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]]
I0902 20:44:37.573120       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45] started
I0902 20:44:37.577989       1 pv_controller.go:1340] isVolumeReleased[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is released
... skipping 2 lines ...
I0902 20:44:42.752557       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45
I0902 20:44:42.752602       1 pv_controller.go:1435] volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" deleted
I0902 20:44:42.752618       1 pv_controller.go:1283] deleteVolumeOperation [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: success
I0902 20:44:42.763587       1 pv_protection_controller.go:205] Got event on PV pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45
I0902 20:44:42.764243       1 pv_protection_controller.go:125] Processing PV pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45
I0902 20:44:42.764858       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" with version 2773
I0902 20:44:42.765162       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: phase: Failed, bound to: "azuredisk-2205/pvc-tpzm8 (uid: cf0575c5-c12b-4a4e-92fd-d0fd03c87a45)", boundByController: true
I0902 20:44:42.765408       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: volume is bound to claim azuredisk-2205/pvc-tpzm8
I0902 20:44:42.765580       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: claim azuredisk-2205/pvc-tpzm8 not found
I0902 20:44:42.765738       1 pv_controller.go:1108] reclaimVolume[pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45]: policy is Delete
I0902 20:44:42.765904       1 pv_controller.go:1752] scheduleOperation[delete-pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45[56bbe72e-c0c7-4910-b447-fce37c6c8fd9]]
I0902 20:44:42.766103       1 pv_controller.go:1231] deleteVolumeOperation [pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45] started
I0902 20:44:42.770911       1 pv_controller.go:1243] Volume "pvc-cf0575c5-c12b-4a4e-92fd-d0fd03c87a45" is already being deleted
... skipping 393 lines ...
I0902 20:44:59.292191       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-k8px2] status: phase Bound already set
I0902 20:44:59.292342       1 pv_controller.go:1038] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" bound to claim "azuredisk-1387/pvc-k8px2"
I0902 20:44:59.292474       1 pv_controller.go:1039] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-k8px2 (uid: 73ac796c-4419-4396-8862-a15fcb581aae)", boundByController: true
I0902 20:44:59.292611       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-k8px2" status after binding: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:44:59.309337       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0902 20:44:59.334072       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-9gzqw, uid 829bbb06-f44c-49d8-9d8e-a4c786b85d5b, event type delete
E0902 20:44:59.347725       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-lz7w6" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0902 20:44:59.366512       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0902 20:44:59.366701       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid f1b1c64c-1dcc-4b6b-9fec-75ec3bbdc5f5, event type delete
I0902 20:44:59.366719       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.4µs)
I0902 20:44:59.388689       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3410, name pvc-qqrrp.171125f8a6b8bcce, uid 276c55db-03c1-4028-93f7-125c1f293cb9, event type delete
I0902 20:44:59.449910       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3410, name kube-root-ca.crt, uid 8ea1d79e-7a4f-4a45-a36f-ca8fefec587b, event type delete
I0902 20:44:59.457598       1 publisher.go:186] Finished syncing namespace "azuredisk-3410" (7.627573ms)
... skipping 464 lines ...
I0902 20:45:37.744390       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:45:37.744420       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:45:37.744463       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:45:37.744493       1 pv_controller.go:1763] operation "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" is already running, skipping
I0902 20:45:37.746325       1 pv_controller.go:1340] isVolumeReleased[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is released
I0902 20:45:37.746343       1 pv_controller.go:1404] doDeleteVolume [pvc-9571446a-3972-4090-a129-b8142098251d]
I0902 20:45:37.906347       1 pv_controller.go:1259] deletion of volume "pvc-9571446a-3972-4090-a129-b8142098251d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:45:37.906369       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: set phase Failed
I0902 20:45:37.906380       1 pv_controller.go:858] updating PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: set phase Failed
I0902 20:45:37.910829       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:45:37.910860       1 pv_controller.go:879] volume "pvc-9571446a-3972-4090-a129-b8142098251d" entered phase "Failed"
I0902 20:45:37.910871       1 pv_controller.go:901] volume "pvc-9571446a-3972-4090-a129-b8142098251d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
E0902 20:45:37.910910       1 goroutinemap.go:150] Operation for "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" failed. No retries permitted until 2022-09-02 20:45:38.410891385 +0000 UTC m=+1063.473505276 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:45:37.911180       1 event.go:291] "Event occurred" object="pvc-9571446a-3972-4090-a129-b8142098251d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted"
I0902 20:45:37.911378       1 pv_protection_controller.go:205] Got event on PV pvc-9571446a-3972-4090-a129-b8142098251d
I0902 20:45:37.911411       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:45:37.911435       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:45:37.911462       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:45:37.911482       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:45:37.911491       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:45:37.911507       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:45:37.911518       1 pv_controller.go:1765] operation "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" postponed due to exponential backoff
I0902 20:45:38.321390       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
... skipping 52 lines ...
I0902 20:45:52.578460       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: volume is bound to claim azuredisk-1387/pvc-k8px2
I0902 20:45:52.578791       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: claim azuredisk-1387/pvc-k8px2 found: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:45:52.578813       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: all is bound
I0902 20:45:52.578835       1 pv_controller.go:858] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: set phase Bound
I0902 20:45:52.578848       1 pv_controller.go:861] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: phase Bound already set
I0902 20:45:52.578866       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:45:52.578890       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:45:52.578921       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:45:52.578945       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:45:52.578959       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:45:52.578974       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:45:52.579003       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d] started
I0902 20:45:52.579415       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-1387/pvc-b8pcm]: volume "pvc-aec977c6-4d65-44da-b908-eb1a60fc23fd" found: phase: Bound, bound to: "azuredisk-1387/pvc-b8pcm (uid: aec977c6-4d65-44da-b908-eb1a60fc23fd)", boundByController: true
... skipping 25 lines ...
I0902 20:45:52.582602       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-k8px2] status: phase Bound already set
I0902 20:45:52.582920       1 pv_controller.go:1038] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" bound to claim "azuredisk-1387/pvc-k8px2"
I0902 20:45:52.583057       1 pv_controller.go:1039] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-k8px2 (uid: 73ac796c-4419-4396-8862-a15fcb581aae)", boundByController: true
I0902 20:45:52.583189       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-k8px2" status after binding: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:45:52.599759       1 pv_controller.go:1340] isVolumeReleased[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is released
I0902 20:45:52.599819       1 pv_controller.go:1404] doDeleteVolume [pvc-9571446a-3972-4090-a129-b8142098251d]
I0902 20:45:52.630660       1 pv_controller.go:1259] deletion of volume "pvc-9571446a-3972-4090-a129-b8142098251d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:45:52.630681       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: set phase Failed
I0902 20:45:52.630692       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: phase Failed already set
E0902 20:45:52.630722       1 goroutinemap.go:150] Operation for "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" failed. No retries permitted until 2022-09-02 20:45:53.630700933 +0000 UTC m=+1078.693314924 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:45:55.657019       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0902 20:45:57.519329       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.606µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58818" resp=200
I0902 20:45:59.024110       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 36 items received
I0902 20:45:59.561047       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-aec977c6-4d65-44da-b908-eb1a60fc23fd) returned with <nil>
I0902 20:45:59.561468       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-aec977c6-4d65-44da-b908-eb1a60fc23fd) succeeded
I0902 20:45:59.561774       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-aec977c6-4d65-44da-b908-eb1a60fc23fd was detached from node:capz-clawep-mp-0000001
... skipping 19 lines ...
I0902 20:46:07.578407       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: volume is bound to claim azuredisk-1387/pvc-k8px2
I0902 20:46:07.578426       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: claim azuredisk-1387/pvc-k8px2 found: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:46:07.578441       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: all is bound
I0902 20:46:07.578450       1 pv_controller.go:858] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: set phase Bound
I0902 20:46:07.578497       1 pv_controller.go:861] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: phase Bound already set
I0902 20:46:07.578535       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:46:07.578589       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:46:07.578614       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:46:07.578640       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:46:07.578684       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:46:07.578720       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:46:07.578779       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d] started
I0902 20:46:07.579120       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-b8pcm" with version 2884
... skipping 29 lines ...
I0902 20:46:07.580129       1 pv_controller.go:1039] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-k8px2 (uid: 73ac796c-4419-4396-8862-a15fcb581aae)", boundByController: true
I0902 20:46:07.580147       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-k8px2" status after binding: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:46:07.586014       1 pv_controller.go:1340] isVolumeReleased[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is released
I0902 20:46:07.586340       1 pv_controller.go:1404] doDeleteVolume [pvc-9571446a-3972-4090-a129-b8142098251d]
I0902 20:46:07.610125       1 gc_controller.go:161] GC'ing orphaned
I0902 20:46:07.610154       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:46:07.635946       1 pv_controller.go:1259] deletion of volume "pvc-9571446a-3972-4090-a129-b8142098251d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:46:07.635968       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: set phase Failed
I0902 20:46:07.635978       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: phase Failed already set
E0902 20:46:07.636006       1 goroutinemap.go:150] Operation for "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" failed. No retries permitted until 2022-09-02 20:46:09.635986811 +0000 UTC m=+1094.698600702 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_1), could not be deleted
I0902 20:46:08.341450       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:46:08.439020       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 7 items received
I0902 20:46:14.408514       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.DaemonSet total 0 items received
I0902 20:46:14.441351       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StorageClass total 17 items received
I0902 20:46:14.940319       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-73ac796c-4419-4396-8862-a15fcb581aae) returned with <nil>
I0902 20:46:14.940638       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-73ac796c-4419-4396-8862-a15fcb581aae) succeeded
... skipping 19 lines ...
I0902 20:46:22.579542       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: volume is bound to claim azuredisk-1387/pvc-k8px2
I0902 20:46:22.579565       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: claim azuredisk-1387/pvc-k8px2 found: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:46:22.579620       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: all is bound
I0902 20:46:22.579636       1 pv_controller.go:858] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: set phase Bound
I0902 20:46:22.579648       1 pv_controller.go:861] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: phase Bound already set
I0902 20:46:22.579668       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:46:22.579716       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:46:22.579749       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:46:22.579796       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:46:22.579813       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:46:22.579869       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:46:22.579929       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d] started
I0902 20:46:22.580044       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1387/pvc-b8pcm" with version 2884
... skipping 27 lines ...
I0902 20:46:22.580439       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1387/pvc-k8px2] status: phase Bound already set
I0902 20:46:22.580450       1 pv_controller.go:1038] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" bound to claim "azuredisk-1387/pvc-k8px2"
I0902 20:46:22.580467       1 pv_controller.go:1039] volume "pvc-73ac796c-4419-4396-8862-a15fcb581aae" status after binding: phase: Bound, bound to: "azuredisk-1387/pvc-k8px2 (uid: 73ac796c-4419-4396-8862-a15fcb581aae)", boundByController: true
I0902 20:46:22.580481       1 pv_controller.go:1040] claim "azuredisk-1387/pvc-k8px2" status after binding: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:46:22.587686       1 pv_controller.go:1340] isVolumeReleased[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is released
I0902 20:46:22.587706       1 pv_controller.go:1404] doDeleteVolume [pvc-9571446a-3972-4090-a129-b8142098251d]
I0902 20:46:22.587843       1 pv_controller.go:1259] deletion of volume "pvc-9571446a-3972-4090-a129-b8142098251d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) since it's in attaching or detaching state
I0902 20:46:22.587863       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: set phase Failed
I0902 20:46:22.587876       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9571446a-3972-4090-a129-b8142098251d]: phase Failed already set
E0902 20:46:22.587935       1 goroutinemap.go:150] Operation for "delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]" failed. No retries permitted until 2022-09-02 20:46:26.587886052 +0000 UTC m=+1111.650499943 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d) since it's in attaching or detaching state
I0902 20:46:24.325988       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0902 20:46:27.481536       1 controller.go:272] Triggering nodeSync
I0902 20:46:27.481596       1 controller.go:291] nodeSync has been triggered
I0902 20:46:27.481617       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0902 20:46:27.481638       1 controller.go:804] Finished updateLoadBalancerHosts
I0902 20:46:27.481652       1 controller.go:731] It took 3.8804e-05 seconds to finish nodeSyncInternal
... skipping 53 lines ...
I0902 20:46:37.580942       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: volume is bound to claim azuredisk-1387/pvc-k8px2
I0902 20:46:37.580963       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: claim azuredisk-1387/pvc-k8px2 found: phase: Bound, bound to: "pvc-73ac796c-4419-4396-8862-a15fcb581aae", bindCompleted: true, boundByController: true
I0902 20:46:37.581022       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: all is bound
I0902 20:46:37.581033       1 pv_controller.go:858] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: set phase Bound
I0902 20:46:37.581047       1 pv_controller.go:861] updating PersistentVolume[pvc-73ac796c-4419-4396-8862-a15fcb581aae]: phase Bound already set
I0902 20:46:37.581098       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 2997
I0902 20:46:37.581129       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:46:37.581189       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:46:37.581219       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:46:37.581261       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:46:37.581289       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:46:37.581370       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d] started
I0902 20:46:37.588161       1 pv_controller.go:1340] isVolumeReleased[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is released
... skipping 2 lines ...
I0902 20:46:42.809402       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-9571446a-3972-4090-a129-b8142098251d
I0902 20:46:42.809455       1 pv_controller.go:1435] volume "pvc-9571446a-3972-4090-a129-b8142098251d" deleted
I0902 20:46:42.809500       1 pv_controller.go:1283] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d]: success
I0902 20:46:42.823193       1 pv_protection_controller.go:205] Got event on PV pvc-9571446a-3972-4090-a129-b8142098251d
I0902 20:46:42.823222       1 pv_protection_controller.go:125] Processing PV pvc-9571446a-3972-4090-a129-b8142098251d
I0902 20:46:42.823494       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9571446a-3972-4090-a129-b8142098251d" with version 3095
I0902 20:46:42.823837       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: phase: Failed, bound to: "azuredisk-1387/pvc-tk6zc (uid: 9571446a-3972-4090-a129-b8142098251d)", boundByController: true
I0902 20:46:42.824028       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: volume is bound to claim azuredisk-1387/pvc-tk6zc
I0902 20:46:42.824146       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: claim azuredisk-1387/pvc-tk6zc not found
I0902 20:46:42.824261       1 pv_controller.go:1108] reclaimVolume[pvc-9571446a-3972-4090-a129-b8142098251d]: policy is Delete
I0902 20:46:42.824367       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9571446a-3972-4090-a129-b8142098251d[da7fbf00-06f3-4674-8af6-45d7c943596a]]
I0902 20:46:42.824501       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9571446a-3972-4090-a129-b8142098251d] started
I0902 20:46:42.828920       1 pv_controller.go:1243] Volume "pvc-9571446a-3972-4090-a129-b8142098251d" is already being deleted
... skipping 412 lines ...
I0902 20:47:08.989599       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-b8pcm.171125fb67d43da7, uid a007b92c-7f3d-4a43-a351-ab4b50ca1ce4, event type delete
I0902 20:47:08.993527       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-k8px2.171125fad4b18f82, uid a5931333-2b7d-4f19-a181-9681f0b5836d, event type delete
I0902 20:47:08.997315       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-k8px2.171125fb6d2d7331, uid ec748e57-22d0-4501-badb-16a2a603941b, event type delete
I0902 20:47:09.001916       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-tk6zc.171125fad87c9554, uid c7fd2d88-117f-4729-81ab-95cc54cbf4a4, event type delete
I0902 20:47:09.005373       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1387, name pvc-tk6zc.171125fb7b40e3bf, uid 40d5d92c-1e2a-4dce-8eb9-bf71fb643ff7, event type delete
I0902 20:47:09.011542       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1387, name default-token-5s8wb, uid 2c65ff8b-429d-42c6-b974-b741126a0c0e, event type delete
E0902 20:47:09.023192       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1387/default: secrets "default-token-qz8cm" is forbidden: unable to create new content in namespace azuredisk-1387 because it is being terminated
I0902 20:47:09.039818       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1387/default), service account deleted, removing tokens
I0902 20:47:09.039879       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1387, name default, uid d219a2bd-c07a-4e72-ba3e-c3ec486831fb, event type delete
I0902 20:47:09.040010       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1387" (2.101µs)
I0902 20:47:09.060233       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1387, name kube-root-ca.crt, uid a6fde289-d502-4bc0-9df1-8fce8c72fc3c, event type delete
I0902 20:47:09.066895       1 publisher.go:186] Finished syncing namespace "azuredisk-1387" (6.61687ms)
I0902 20:47:09.087627       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1387, estimate: 0, errors: <nil>
... skipping 162 lines ...
I0902 20:47:35.262449       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:47:35.262461       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:47:35.262468       1 pv_controller.go:1763] operation "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" is already running, skipping
I0902 20:47:35.262537       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba] started
I0902 20:47:35.264713       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is released
I0902 20:47:35.264732       1 pv_controller.go:1404] doDeleteVolume [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]
I0902 20:47:35.288254       1 pv_controller.go:1259] deletion of volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:35.288272       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: set phase Failed
I0902 20:47:35.288279       1 pv_controller.go:858] updating PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: set phase Failed
I0902 20:47:35.291175       1 pv_protection_controller.go:205] Got event on PV pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba
I0902 20:47:35.291274       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:47:35.291470       1 pv_controller.go:879] volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" entered phase "Failed"
I0902 20:47:35.291484       1 pv_controller.go:901] volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:35.291371       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:47:35.291542       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:47:35.291589       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:47:35.291611       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:47:35.291619       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:47:35.291654       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:47:35.291664       1 pv_controller.go:1763] operation "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" is already running, skipping
E0902 20:47:35.291784       1 goroutinemap.go:150] Operation for "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" failed. No retries permitted until 2022-09-02 20:47:35.791764902 +0000 UTC m=+1180.854378793 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:35.292092       1 event.go:291] "Event occurred" object="pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
I0902 20:47:37.449680       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:47:37.465190       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:47:37.519979       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.106µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54738" resp=200
I0902 20:47:37.583495       1 pv_controller_base.go:528] resyncing PV controller
I0902 20:47:37.583569       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" with version 3162
I0902 20:47:37.583651       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: phase: Bound, bound to: "azuredisk-4547/pvc-9v8fc (uid: e117ae25-a514-43f4-a46b-82944814fc31)", boundByController: true
I0902 20:47:37.583686       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: volume is bound to claim azuredisk-4547/pvc-9v8fc
I0902 20:47:37.583755       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: claim azuredisk-4547/pvc-9v8fc found: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:47:37.583775       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: all is bound
I0902 20:47:37.583819       1 pv_controller.go:858] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: set phase Bound
I0902 20:47:37.583854       1 pv_controller.go:861] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: phase Bound already set
I0902 20:47:37.583873       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:47:37.583930       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:47:37.584003       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:47:37.584027       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:47:37.584037       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:47:37.584086       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:47:37.584251       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba] started
I0902 20:47:37.584590       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-9v8fc" with version 3164
... skipping 11 lines ...
I0902 20:47:37.587524       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4547/pvc-9v8fc] status: phase Bound already set
I0902 20:47:37.587541       1 pv_controller.go:1038] volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" bound to claim "azuredisk-4547/pvc-9v8fc"
I0902 20:47:37.587568       1 pv_controller.go:1039] volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-9v8fc (uid: e117ae25-a514-43f4-a46b-82944814fc31)", boundByController: true
I0902 20:47:37.587599       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-9v8fc" status after binding: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:47:37.594665       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is released
I0902 20:47:37.594684       1 pv_controller.go:1404] doDeleteVolume [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]
I0902 20:47:37.617428       1 pv_controller.go:1259] deletion of volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:37.617449       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: set phase Failed
I0902 20:47:37.617462       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase Failed already set
E0902 20:47:37.617539       1 goroutinemap.go:150] Operation for "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" failed. No retries permitted until 2022-09-02 20:47:38.617502695 +0000 UTC m=+1183.680116686 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:37.687761       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0902 20:47:38.412084       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:47:38.975324       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-clawep-mp-0000000"
I0902 20:47:38.975361       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba to the node "capz-clawep-mp-0000000" mounted false
I0902 20:47:38.975373       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31 to the node "capz-clawep-mp-0000000" mounted false
I0902 20:47:39.034959       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31\"}]}}" for node "capz-clawep-mp-0000000" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31 1}]
... skipping 27 lines ...
I0902 20:47:52.584676       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-4547/pvc-9v8fc" with version 3164
I0902 20:47:52.585321       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-4547/pvc-9v8fc]: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:47:52.585227       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: all is bound
I0902 20:47:52.585449       1 pv_controller.go:858] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: set phase Bound
I0902 20:47:52.585480       1 pv_controller.go:861] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: phase Bound already set
I0902 20:47:52.585532       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:47:52.585590       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:47:52.585662       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:47:52.585720       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:47:52.585763       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:47:52.585833       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:47:52.585891       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba] started
I0902 20:47:52.585554       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-4547/pvc-9v8fc]: volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" found: phase: Bound, bound to: "azuredisk-4547/pvc-9v8fc (uid: e117ae25-a514-43f4-a46b-82944814fc31)", boundByController: true
... skipping 9 lines ...
I0902 20:47:52.586291       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4547/pvc-9v8fc] status: phase Bound already set
I0902 20:47:52.586321       1 pv_controller.go:1038] volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" bound to claim "azuredisk-4547/pvc-9v8fc"
I0902 20:47:52.586341       1 pv_controller.go:1039] volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" status after binding: phase: Bound, bound to: "azuredisk-4547/pvc-9v8fc (uid: e117ae25-a514-43f4-a46b-82944814fc31)", boundByController: true
I0902 20:47:52.586381       1 pv_controller.go:1040] claim "azuredisk-4547/pvc-9v8fc" status after binding: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:47:52.597290       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is released
I0902 20:47:52.597307       1 pv_controller.go:1404] doDeleteVolume [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]
I0902 20:47:52.621546       1 pv_controller.go:1259] deletion of volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:52.621572       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: set phase Failed
I0902 20:47:52.621584       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase Failed already set
E0902 20:47:52.621750       1 goroutinemap.go:150] Operation for "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" failed. No retries permitted until 2022-09-02 20:47:54.621594264 +0000 UTC m=+1199.684208255 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:47:54.428828       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31) returned with <nil>
I0902 20:47:54.428926       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31) succeeded
I0902 20:47:54.428947       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31 was detached from node:capz-clawep-mp-0000000
I0902 20:47:54.428976       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-e117ae25-a514-43f4-a46b-82944814fc31" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-e117ae25-a514-43f4-a46b-82944814fc31") on node "capz-clawep-mp-0000000" 
I0902 20:47:54.429149       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-clawep-mp-0000000, refreshing the cache
I0902 20:47:54.527820       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba"
... skipping 37 lines ...
I0902 20:48:07.588768       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: volume is bound to claim azuredisk-4547/pvc-9v8fc
I0902 20:48:07.588944       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: claim azuredisk-4547/pvc-9v8fc found: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:48:07.588967       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: all is bound
I0902 20:48:07.589007       1 pv_controller.go:858] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: set phase Bound
I0902 20:48:07.589018       1 pv_controller.go:861] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: phase Bound already set
I0902 20:48:07.589037       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:48:07.589091       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:48:07.589129       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:48:07.589180       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:48:07.589202       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:48:07.589223       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:48:07.589301       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba] started
I0902 20:48:07.591921       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is released
I0902 20:48:07.591939       1 pv_controller.go:1404] doDeleteVolume [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]
I0902 20:48:07.591993       1 pv_controller.go:1259] deletion of volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) since it's in attaching or detaching state
I0902 20:48:07.592006       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: set phase Failed
I0902 20:48:07.592016       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase Failed already set
E0902 20:48:07.592065       1 goroutinemap.go:150] Operation for "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" failed. No retries permitted until 2022-09-02 20:48:11.59202362 +0000 UTC m=+1216.654637511 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) since it's in attaching or detaching state
I0902 20:48:07.614165       1 gc_controller.go:161] GC'ing orphaned
I0902 20:48:07.614536       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:48:08.429510       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:48:09.830115       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) returned with <nil>
I0902 20:48:09.830176       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba) succeeded
I0902 20:48:09.830188       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba was detached from node:capz-clawep-mp-0000000
... skipping 23 lines ...
I0902 20:48:22.585769       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: volume is bound to claim azuredisk-4547/pvc-9v8fc
I0902 20:48:22.585790       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: claim azuredisk-4547/pvc-9v8fc found: phase: Bound, bound to: "pvc-e117ae25-a514-43f4-a46b-82944814fc31", bindCompleted: true, boundByController: true
I0902 20:48:22.585809       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: all is bound
I0902 20:48:22.585824       1 pv_controller.go:858] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: set phase Bound
I0902 20:48:22.585834       1 pv_controller.go:861] updating PersistentVolume[pvc-e117ae25-a514-43f4-a46b-82944814fc31]: phase Bound already set
I0902 20:48:22.585850       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3258
I0902 20:48:22.585876       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:48:22.585897       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:48:22.585922       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:48:22.585955       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:48:22.585975       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:48:22.586025       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba] started
I0902 20:48:22.596257       1 pv_controller.go:1340] isVolumeReleased[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is released
... skipping 4 lines ...
I0902 20:48:27.767936       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba
I0902 20:48:27.767967       1 pv_controller.go:1435] volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" deleted
I0902 20:48:27.767980       1 pv_controller.go:1283] deleteVolumeOperation [pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: success
I0902 20:48:27.774540       1 pv_protection_controller.go:205] Got event on PV pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba
I0902 20:48:27.774574       1 pv_protection_controller.go:125] Processing PV pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba
I0902 20:48:27.774886       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" with version 3336
I0902 20:48:27.774928       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: phase: Failed, bound to: "azuredisk-4547/pvc-8n4gm (uid: 7dc49c6d-9bd8-468c-9881-27f2a78ae2ba)", boundByController: true
I0902 20:48:27.774956       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: volume is bound to claim azuredisk-4547/pvc-8n4gm
I0902 20:48:27.774981       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: claim azuredisk-4547/pvc-8n4gm not found
I0902 20:48:27.774992       1 pv_controller.go:1108] reclaimVolume[pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba]: policy is Delete
I0902 20:48:27.775008       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]]
I0902 20:48:27.775016       1 pv_controller.go:1763] operation "delete-pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba[e9d64744-9580-4ec9-a626-d521172f345c]" is already running, skipping
I0902 20:48:27.783170       1 pv_controller_base.go:235] volume "pvc-7dc49c6d-9bd8-468c-9881-27f2a78ae2ba" deleted
... skipping 347 lines ...
I0902 20:48:45.971356       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7") from node "capz-clawep-mp-0000000" 
I0902 20:48:45.971523       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-clawep-mp-0000000, refreshing the cache
I0902 20:48:45.971415       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-eab618b3-bfbc-4d3a-87d9-372567823396" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-eab618b3-bfbc-4d3a-87d9-372567823396") from node "capz-clawep-mp-0000000" 
I0902 20:48:45.971819       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093") from node "capz-clawep-mp-0000000" 
I0902 20:48:46.069227       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4547
I0902 20:48:46.088854       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4547, name default-token-qh57c, uid 0922616a-d467-4ad3-b94f-3d8db7dc9460, event type delete
E0902 20:48:46.117207       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4547/default: secrets "default-token-bwwb7" is forbidden: unable to create new content in namespace azuredisk-4547 because it is being terminated
I0902 20:48:46.146895       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.171126194ac65614, uid d0005b28-0757-4671-aeaa-6f815bbfd221, event type delete
I0902 20:48:46.152822       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.1711261bd68e7dcf, uid f9254dfd-947b-42e5-b34c-1e2f8c9ee5b5, event type delete
I0902 20:48:46.156405       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.1711261c4cd4c95f, uid 4f97b1b0-4e59-4688-9b1f-d83b451b8f35, event type delete
I0902 20:48:46.159841       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.1711261c4cd53b70, uid bbf21d12-4ccc-4027-932a-13887d038ccb, event type delete
I0902 20:48:46.167001       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.1711261e3f1102f1, uid 52f4a2a4-af1d-4b6d-adcc-4e76d32cab02, event type delete
I0902 20:48:46.174151       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4547, name azuredisk-volume-tester-9bmx6.1711261eadd21b16, uid 053e9358-b9f2-48d0-9c80-33ee36c225ee, event type delete
... skipping 12 lines ...
I0902 20:48:46.283990       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4547, estimate: 0, errors: <nil>
I0902 20:48:46.292406       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4547" (225.725606ms)
I0902 20:48:46.633098       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0902 20:48:46.684478       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid c0db588a-c304-42db-a42b-d13f73805703, event type delete
I0902 20:48:46.688158       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (3.626968ms)
I0902 20:48:46.691685       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-hhx8h, uid cc69993c-9b00-4ab4-8ac5-b05ec72f89d0, event type delete
E0902 20:48:46.704557       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-nddgh" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0902 20:48:46.746652       1 tokens_controller.go:252] syncServiceAccount(azuredisk-7051/default), service account deleted, removing tokens
I0902 20:48:46.747662       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-7051, name default, uid 37e2e775-8931-4a4a-8870-ab1d78d78c57, event type delete
I0902 20:48:46.747709       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (26.202µs)
I0902 20:48:46.781213       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7051" (2.5µs)
I0902 20:48:46.781176       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7051, estimate: 0, errors: <nil>
I0902 20:48:46.796020       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7051" (165.937039ms)
... skipping 378 lines ...
I0902 20:49:25.866651       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:49:25.866666       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:49:25.866674       1 pv_controller.go:1763] operation "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" is already running, skipping
I0902 20:49:25.866701       1 pv_controller.go:1231] deleteVolumeOperation [pvc-091dddad-5413-4a5a-b5e2-346aa248e093] started
I0902 20:49:25.868429       1 pv_controller.go:1340] isVolumeReleased[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is released
I0902 20:49:25.868445       1 pv_controller.go:1404] doDeleteVolume [pvc-091dddad-5413-4a5a-b5e2-346aa248e093]
I0902 20:49:25.909957       1 pv_controller.go:1259] deletion of volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:25.910152       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: set phase Failed
I0902 20:49:25.910171       1 pv_controller.go:858] updating PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: set phase Failed
I0902 20:49:25.917844       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3533
I0902 20:49:25.917871       1 pv_controller.go:879] volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" entered phase "Failed"
I0902 20:49:25.917881       1 pv_controller.go:901] volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
E0902 20:49:25.917917       1 goroutinemap.go:150] Operation for "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" failed. No retries permitted until 2022-09-02 20:49:26.417899114 +0000 UTC m=+1291.480513005 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:25.918357       1 pv_protection_controller.go:205] Got event on PV pvc-091dddad-5413-4a5a-b5e2-346aa248e093
I0902 20:49:25.918393       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3533
I0902 20:49:25.918430       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:49:25.918717       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:49:25.918891       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:49:25.919049       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:49:25.919203       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:49:25.919335       1 pv_controller.go:1765] operation "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" postponed due to exponential backoff
I0902 20:49:25.918646       1 event.go:291] "Event occurred" object="pvc-091dddad-5413-4a5a-b5e2-346aa248e093" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted"
... skipping 83 lines ...
I0902 20:49:37.591198       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-7578/pvc-5cjdk]: already bound to "pvc-eab618b3-bfbc-4d3a-87d9-372567823396"
I0902 20:49:37.591212       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-7578/pvc-5cjdk] status: set phase Bound
I0902 20:49:37.591231       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-5cjdk] status: phase Bound already set
I0902 20:49:37.591244       1 pv_controller.go:1038] volume "pvc-eab618b3-bfbc-4d3a-87d9-372567823396" bound to claim "azuredisk-7578/pvc-5cjdk"
I0902 20:49:37.591276       1 pv_controller.go:1039] volume "pvc-eab618b3-bfbc-4d3a-87d9-372567823396" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-5cjdk (uid: eab618b3-bfbc-4d3a-87d9-372567823396)", boundByController: true
I0902 20:49:37.591294       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-5cjdk" status after binding: phase: Bound, bound to: "pvc-eab618b3-bfbc-4d3a-87d9-372567823396", bindCompleted: true, boundByController: true
I0902 20:49:37.591317       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:49:37.591340       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:49:37.591360       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:49:37.591368       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:49:37.591383       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:49:37.591408       1 pv_controller.go:1231] deleteVolumeOperation [pvc-091dddad-5413-4a5a-b5e2-346aa248e093] started
I0902 20:49:37.596374       1 pv_controller.go:1340] isVolumeReleased[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is released
I0902 20:49:37.596395       1 pv_controller.go:1404] doDeleteVolume [pvc-091dddad-5413-4a5a-b5e2-346aa248e093]
I0902 20:49:37.630913       1 pv_controller.go:1259] deletion of volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:37.630933       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: set phase Failed
I0902 20:49:37.630945       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase Failed already set
E0902 20:49:37.630974       1 goroutinemap.go:150] Operation for "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" failed. No retries permitted until 2022-09-02 20:49:38.630954131 +0000 UTC m=+1303.693568022 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:38.477963       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:49:44.447853       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7) returned with <nil>
I0902 20:49:44.447919       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7) succeeded
I0902 20:49:44.447962       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7 was detached from node:capz-clawep-mp-0000000
I0902 20:49:44.448007       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7") on node "capz-clawep-mp-0000000" 
I0902 20:49:44.448201       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-clawep-mp-0000000, refreshing the cache
... skipping 23 lines ...
I0902 20:49:52.591775       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: volume is bound to claim azuredisk-7578/pvc-5cjdk
I0902 20:49:52.591796       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: claim azuredisk-7578/pvc-5cjdk found: phase: Bound, bound to: "pvc-eab618b3-bfbc-4d3a-87d9-372567823396", bindCompleted: true, boundByController: true
I0902 20:49:52.591815       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: all is bound
I0902 20:49:52.591826       1 pv_controller.go:858] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: set phase Bound
I0902 20:49:52.591839       1 pv_controller.go:861] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: phase Bound already set
I0902 20:49:52.591852       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3533
I0902 20:49:52.591877       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:49:52.591904       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:49:52.591931       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:49:52.591941       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:49:52.591961       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:49:52.591996       1 pv_controller.go:1231] deleteVolumeOperation [pvc-091dddad-5413-4a5a-b5e2-346aa248e093] started
I0902 20:49:52.592329       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-8vl8f" with version 3408
... skipping 27 lines ...
I0902 20:49:52.592769       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-5cjdk] status: phase Bound already set
I0902 20:49:52.592782       1 pv_controller.go:1038] volume "pvc-eab618b3-bfbc-4d3a-87d9-372567823396" bound to claim "azuredisk-7578/pvc-5cjdk"
I0902 20:49:52.592801       1 pv_controller.go:1039] volume "pvc-eab618b3-bfbc-4d3a-87d9-372567823396" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-5cjdk (uid: eab618b3-bfbc-4d3a-87d9-372567823396)", boundByController: true
I0902 20:49:52.592819       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-5cjdk" status after binding: phase: Bound, bound to: "pvc-eab618b3-bfbc-4d3a-87d9-372567823396", bindCompleted: true, boundByController: true
I0902 20:49:52.595928       1 pv_controller.go:1340] isVolumeReleased[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is released
I0902 20:49:52.595961       1 pv_controller.go:1404] doDeleteVolume [pvc-091dddad-5413-4a5a-b5e2-346aa248e093]
I0902 20:49:52.619141       1 pv_controller.go:1259] deletion of volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:52.619174       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: set phase Failed
I0902 20:49:52.619184       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase Failed already set
E0902 20:49:52.619211       1 goroutinemap.go:150] Operation for "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" failed. No retries permitted until 2022-09-02 20:49:54.619193658 +0000 UTC m=+1319.681807549 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/virtualMachineScaleSets/capz-clawep-mp-0/virtualMachines/capz-clawep-mp-0_0), could not be deleted
I0902 20:49:56.783815       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 3 items received
I0902 20:49:57.519806       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.207µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59984" resp=200
I0902 20:49:58.405411       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 8 items received
I0902 20:49:59.762359       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-eab618b3-bfbc-4d3a-87d9-372567823396) returned with <nil>
I0902 20:49:59.762418       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-eab618b3-bfbc-4d3a-87d9-372567823396) succeeded
I0902 20:49:59.762428       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-eab618b3-bfbc-4d3a-87d9-372567823396 was detached from node:capz-clawep-mp-0000000
... skipping 11 lines ...
I0902 20:50:07.592421       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: volume is bound to claim azuredisk-7578/pvc-5cjdk
I0902 20:50:07.592520       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: claim azuredisk-7578/pvc-5cjdk found: phase: Bound, bound to: "pvc-eab618b3-bfbc-4d3a-87d9-372567823396", bindCompleted: true, boundByController: true
I0902 20:50:07.592583       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: all is bound
I0902 20:50:07.592620       1 pv_controller.go:858] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: set phase Bound
I0902 20:50:07.592713       1 pv_controller.go:861] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: phase Bound already set
I0902 20:50:07.592780       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3533
I0902 20:50:07.592827       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:50:07.592925       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:50:07.592951       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:50:07.592959       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:50:07.592976       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:50:07.593019       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" with version 3406
I0902 20:50:07.593082       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7]: phase: Bound, bound to: "azuredisk-7578/pvc-8vl8f (uid: 1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7)", boundByController: true
... skipping 34 lines ...
I0902 20:50:07.596820       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7578/pvc-8vl8f] status: phase Bound already set
I0902 20:50:07.596946       1 pv_controller.go:1038] volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" bound to claim "azuredisk-7578/pvc-8vl8f"
I0902 20:50:07.597064       1 pv_controller.go:1039] volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-8vl8f (uid: 1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7)", boundByController: true
I0902 20:50:07.597194       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-8vl8f" status after binding: phase: Bound, bound to: "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7", bindCompleted: true, boundByController: true
I0902 20:50:07.602343       1 pv_controller.go:1340] isVolumeReleased[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is released
I0902 20:50:07.602362       1 pv_controller.go:1404] doDeleteVolume [pvc-091dddad-5413-4a5a-b5e2-346aa248e093]
I0902 20:50:07.602396       1 pv_controller.go:1259] deletion of volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) since it's in attaching or detaching state
I0902 20:50:07.602411       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: set phase Failed
I0902 20:50:07.602422       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase Failed already set
E0902 20:50:07.602472       1 goroutinemap.go:150] Operation for "delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]" failed. No retries permitted until 2022-09-02 20:50:11.602430818 +0000 UTC m=+1336.665044809 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) since it's in attaching or detaching state
I0902 20:50:07.619800       1 gc_controller.go:161] GC'ing orphaned
I0902 20:50:07.619852       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:50:08.489980       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0902 20:50:17.519414       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="172.817µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56522" resp=200
I0902 20:50:18.685394       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 2 items received
I0902 20:50:20.175596       1 azure_controller_vmss.go:187] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093) returned with <nil>
... skipping 29 lines ...
I0902 20:50:22.593380       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: all is bound
I0902 20:50:22.593386       1 pv_controller.go:858] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: set phase Bound
I0902 20:50:22.593389       1 pv_controller.go:1038] volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" bound to claim "azuredisk-7578/pvc-8vl8f"
I0902 20:50:22.593394       1 pv_controller.go:861] updating PersistentVolume[pvc-eab618b3-bfbc-4d3a-87d9-372567823396]: phase Bound already set
I0902 20:50:22.593403       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3533
I0902 20:50:22.593405       1 pv_controller.go:1039] volume "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7" status after binding: phase: Bound, bound to: "azuredisk-7578/pvc-8vl8f (uid: 1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7)", boundByController: true
I0902 20:50:22.593419       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:50:22.593418       1 pv_controller.go:1040] claim "azuredisk-7578/pvc-8vl8f" status after binding: phase: Bound, bound to: "pvc-1082b87a-79d0-4af1-93ed-4dcd4a2ce7c7", bindCompleted: true, boundByController: true
I0902 20:50:22.593432       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7578/pvc-5cjdk" with version 3413
I0902 20:50:22.593435       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:50:22.593443       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-7578/pvc-5cjdk]: phase: Bound, bound to: "pvc-eab618b3-bfbc-4d3a-87d9-372567823396", bindCompleted: true, boundByController: true
I0902 20:50:22.593450       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:50:22.593457       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
... skipping 19 lines ...
I0902 20:50:27.620496       1 gc_controller.go:161] GC'ing orphaned
I0902 20:50:27.620552       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0902 20:50:27.838041       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-091dddad-5413-4a5a-b5e2-346aa248e093
I0902 20:50:27.838080       1 pv_controller.go:1435] volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" deleted
I0902 20:50:27.838092       1 pv_controller.go:1283] deleteVolumeOperation [pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: success
I0902 20:50:27.846544       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-091dddad-5413-4a5a-b5e2-346aa248e093" with version 3625
I0902 20:50:27.846733       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: phase: Failed, bound to: "azuredisk-7578/pvc-4jszx (uid: 091dddad-5413-4a5a-b5e2-346aa248e093)", boundByController: true
I0902 20:50:27.846852       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: volume is bound to claim azuredisk-7578/pvc-4jszx
I0902 20:50:27.846914       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: claim azuredisk-7578/pvc-4jszx not found
I0902 20:50:27.846926       1 pv_controller.go:1108] reclaimVolume[pvc-091dddad-5413-4a5a-b5e2-346aa248e093]: policy is Delete
I0902 20:50:27.846942       1 pv_controller.go:1752] scheduleOperation[delete-pvc-091dddad-5413-4a5a-b5e2-346aa248e093[47e83090-48bd-4fa0-b11f-dca053d70ed3]]
I0902 20:50:27.846969       1 pv_controller.go:1231] deleteVolumeOperation [pvc-091dddad-5413-4a5a-b5e2-346aa248e093] started
I0902 20:50:27.847252       1 pv_protection_controller.go:205] Got event on PV pvc-091dddad-5413-4a5a-b5e2-346aa248e093
... skipping 297 lines ...
I0902 20:50:57.337239       1 pv_controller.go:1039] volume "pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab" status after binding: phase: Bound, bound to: "azuredisk-8666/pvc-s4zp2 (uid: 5506ba72-28ff-4849-8271-0b604aa2c2ab)", boundByController: true
I0902 20:50:57.337468       1 pv_controller.go:1040] claim "azuredisk-8666/pvc-s4zp2" status after binding: phase: Bound, bound to: "pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab", bindCompleted: true, boundByController: true
I0902 20:50:57.337502       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-8666/pvc-s4zp2"
I0902 20:50:57.518984       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="82.808µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34094" resp=200
I0902 20:50:57.574332       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1968
I0902 20:50:57.648487       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1968, name default-token-qnwcv, uid 61d6f66b-7eeb-44b3-a660-c5ab5cb79849, event type delete
E0902 20:50:57.660460       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1968/default: secrets "default-token-mqxgz" is forbidden: unable to create new content in namespace azuredisk-1968 because it is being terminated
I0902 20:50:57.685073       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1968/default), service account deleted, removing tokens
I0902 20:50:57.685861       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1968" (3.201µs)
I0902 20:50:57.685904       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1968, name default, uid 897c1a0e-5da8-4d8c-afb9-bf55933a9392, event type delete
I0902 20:50:57.701576       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1968, name kube-root-ca.crt, uid 01e67513-7bb0-4bad-9c7f-a48bbe8113d1, event type delete
I0902 20:50:57.704322       1 publisher.go:186] Finished syncing namespace "azuredisk-1968" (2.693473ms)
I0902 20:50:57.711785       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1968, estimate: 0, errors: <nil>
... skipping 9 lines ...
I0902 20:50:57.788039       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-pmcqx"
I0902 20:50:57.809709       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-clawep-dynamic-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab" to node "capz-clawep-mp-0000001".
I0902 20:50:57.852847       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab" lun 0 to node "capz-clawep-mp-0000001".
I0902 20:50:57.852891       1 azure_controller_vmss.go:101] azureDisk - update(capz-clawep): vm(capz-clawep-mp-0000001) - attach disk(capz-clawep-dynamic-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-clawep/providers/Microsoft.Compute/disks/capz-clawep-dynamic-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab) with DiskEncryptionSetID()
I0902 20:50:58.139492       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4657
I0902 20:50:58.191626       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4657, name default-token-5lm28, uid 58a31f8f-b688-40e3-91cf-c0076803b27d, event type delete
E0902 20:50:58.204131       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4657/default: secrets "default-token-s7988" is forbidden: unable to create new content in namespace azuredisk-4657 because it is being terminated
I0902 20:50:58.204712       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4657, name kube-root-ca.crt, uid 19fec43a-374f-473e-aad7-95768df7dabf, event type delete
I0902 20:50:58.206879       1 publisher.go:186] Finished syncing namespace "azuredisk-4657" (2.115914ms)
I0902 20:50:58.278474       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4657/default), service account deleted, removing tokens
I0902 20:50:58.278686       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4657, name default, uid ded538d6-33d5-4d9e-8ca0-ea366bd256b9, event type delete
I0902 20:50:58.278858       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4657" (2.5µs)
I0902 20:50:58.296777       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4657" (2µs)
I0902 20:50:58.297345       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4657, estimate: 0, errors: <nil>
I0902 20:50:58.306358       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4657" (170.659761ms)
I0902 20:50:58.681355       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1359
I0902 20:50:58.694755       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1359, name kube-root-ca.crt, uid f2f6a676-3216-44fd-be27-2238abae132c, event type delete
I0902 20:50:58.696449       1 publisher.go:186] Finished syncing namespace "azuredisk-1359" (1.941897ms)
I0902 20:50:58.708798       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1359, name default-token-788qs, uid 1411ffa9-8c0a-4a0d-84fe-2f4a2598bd71, event type delete
E0902 20:50:58.725396       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1359/default: secrets "default-token-q2gbt" is forbidden: unable to create new content in namespace azuredisk-1359 because it is being terminated
I0902 20:50:58.755073       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1359/default), service account deleted, removing tokens
I0902 20:50:58.755119       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1359, name default, uid 97badf34-c071-4468-b218-9f120027ef06, event type delete
I0902 20:50:58.755169       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (2µs)
I0902 20:50:58.847976       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1359, estimate: 0, errors: <nil>
I0902 20:50:58.848277       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1359" (2.3µs)
I0902 20:50:58.859040       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1359" (180.428248ms)
I0902 20:50:59.221695       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-565
I0902 20:50:59.274475       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-565, name kube-root-ca.crt, uid 79a78c66-5bba-461e-9050-46165c89f404, event type delete
I0902 20:50:59.278310       1 publisher.go:186] Finished syncing namespace "azuredisk-565" (3.632567ms)
I0902 20:50:59.336079       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-565, name default-token-s594k, uid b85b852b-0a8d-4e9d-a696-d9d9a1793034, event type delete
E0902 20:50:59.346936       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-565/default: secrets "default-token-rhhl6" is forbidden: unable to create new content in namespace azuredisk-565 because it is being terminated
I0902 20:50:59.394436       1 tokens_controller.go:252] syncServiceAccount(azuredisk-565/default), service account deleted, removing tokens
I0902 20:50:59.394480       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-565, name default, uid f80e3db3-af7a-45f8-b03d-040ea197be22, event type delete
I0902 20:50:59.394508       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (1.9µs)
I0902 20:50:59.412138       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-565" (1.5µs)
I0902 20:50:59.413102       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-565, estimate: 0, errors: <nil>
I0902 20:50:59.424605       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-565" (205.788513ms)
... skipping 287 lines ...
I0902 20:52:19.315817       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab]: volume is bound to claim azuredisk-8666/pvc-s4zp2
I0902 20:52:19.315890       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab]: claim azuredisk-8666/pvc-s4zp2 not found
I0902 20:52:19.315902       1 pv_controller.go:1108] reclaimVolume[pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab]: policy is Delete
I0902 20:52:19.315921       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab[7920741e-068c-42ce-94f8-0bae2949215b]]
I0902 20:52:19.316006       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab] started
I0902 20:52:19.323883       1 pv_controller_base.go:235] volume "pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab" deleted
I0902 20:52:19.323890       1 pv_controller.go:1238] error reading persistent volume "pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab": persistentvolumes "pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab" not found
I0902 20:52:19.323990       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab
I0902 20:52:19.324387       1 pv_protection_controller.go:128] Finished processing PV pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab (11.14893ms)
I0902 20:52:19.324080       1 pv_controller_base.go:505] deletion of claim "azuredisk-8666/pvc-s4zp2" was already processed
I0902 20:52:20.411155       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 11 items received
I0902 20:52:22.478104       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0902 20:52:22.598589       1 pv_controller_base.go:528] resyncing PV controller
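The pv_controller lines above trace the full reclaim path for the dynamically provisioned volume pvc-5506ba72-28ff-4849-8271-0b604aa2c2ab: once the bound claim azuredisk-8666/pvc-s4zp2 is gone, the Delete reclaim policy triggers deleteVolumeOperation, the backing disk is removed, and the pv-protection finalizer is cleared. Below is a minimal sketch of how the same flow could be reproduced by hand against the workload cluster; the StorageClass name, namespace, and kubeconfig path are assumptions for illustration, not values taken from this run.

# Assumed kubeconfig for the workload cluster; adjust to your environment.
export KUBECONFIG=./kubeconfig

# Create a PVC against an Azure Disk CSI StorageClass (name "managed-csi" is assumed here).
kubectl create namespace reclaim-demo
cat <<EOF | kubectl -n reclaim-demo apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi
  resources:
    requests:
      storage: 10Gi
EOF

# Once the claim is bound, deleting it drives the same path logged above:
# reclaimVolume -> policy is Delete -> deleteVolumeOperation -> PV and Azure disk removed.
kubectl -n reclaim-demo delete pvc demo-pvc
kubectl get pv --watch   # the pvc-<uid> PV should go Released, then disappear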
... skipping 153 lines ...
I0902 20:52:29.283920       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-pmcqx.171126520f1351f8, uid 992bf105-ec21-4f86-931d-370191bc6798, event type delete
I0902 20:52:29.286294       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-pmcqx.171126522173811f, uid 4a73eeea-2719-4e24-9f8b-bdaafdf116fa, event type delete
I0902 20:52:29.292288       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name azuredisk-volume-tester-pmcqx.17112652a12c9f1b, uid 0a794c6e-41e5-4f77-a8e9-64eedc41f6d7, event type delete
I0902 20:52:29.295804       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name pvc-s4zp2.1711264e2ec4ea1b, uid 416134eb-e426-41bd-9646-ffc249ca8ff3, event type delete
I0902 20:52:29.299199       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8666, name pvc-s4zp2.1711264eca52cbda, uid 9f6a986b-b95e-4289-8455-d288016f85f3, event type delete
I0902 20:52:29.347771       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8666, name default-token-84khz, uid 03bd323e-51df-4a6e-85a1-76c8fdac53c4, event type delete
E0902 20:52:29.363959       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8666/default: secrets "default-token-nks56" is forbidden: unable to create new content in namespace azuredisk-8666 because it is being terminated
I0902 20:52:29.443726       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8666/default), service account deleted, removing tokens
I0902 20:52:29.443773       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8666, name default, uid da2a69c6-b83b-422d-8368-171f501db2d6, event type delete
I0902 20:52:29.443802       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8666" (2.1µs)
I0902 20:52:29.459873       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8666" (2.3µs)
I0902 20:52:29.461209       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8666, estimate: 0, errors: <nil>
I0902 20:52:29.475077       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8666" (244.257842ms)
... skipping 380 lines ...
I0902 20:53:47.625844       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
2022/09/02 20:53:47 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1310.392 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
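The JUnit report path above is emitted by the Ginkgo run itself; as a rough sanity check, the pass/fail/skip counts in the summary can be cross-checked against that artifact once it has been downloaded locally. A sketch, assuming the standard Ginkgo v1 JUnit layout (one <testcase> element per spec, with <skipped/> and <failure> children):

# Assumes junit_01.xml has been copied from /logs/artifacts/ to the current directory.
grep -c "<testcase" junit_01.xml   # total specs reported
grep -c "<skipped"  junit_01.xml   # skipped specs
grep -c "<failure"  junit_01.xml   # failed specs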
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-6msgr, container manager
STEP: Dumping workload cluster default/capz-clawep logs
Sep  2 20:55:18.329: INFO: Collecting logs for Linux node capz-clawep-control-plane-65svl in cluster capz-clawep in namespace default

Sep  2 20:56:18.331: INFO: Collecting boot logs for AzureMachine capz-clawep-control-plane-65svl

Failed to get logs for machine capz-clawep-control-plane-b29k8, cluster default/capz-clawep: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  2 20:56:19.258: INFO: Collecting logs for Linux node capz-clawep-mp-0000000 in cluster capz-clawep in namespace default

Sep  2 20:57:19.260: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-clawep-mp-0

Sep  2 20:57:19.603: INFO: Collecting logs for Linux node capz-clawep-mp-0000001 in cluster capz-clawep in namespace default

Sep  2 20:58:19.604: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-clawep-mp-0

Failed to get logs for machine pool capz-clawep-mp-0, cluster default/capz-clawep: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-clawep kube-system pod logs
STEP: Collecting events for Pod kube-system/calico-kube-controllers-969cf87c4-7fgvj
STEP: Fetching kube-system pod logs took 481.053168ms
STEP: Dumping workload cluster default/capz-clawep Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-7fgvj, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-clawep-control-plane-65svl
STEP: Creating log watcher for controller kube-system/calico-node-lfqz6, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-lfqz6
STEP: Collecting events for Pod kube-system/kube-proxy-bc4nd
STEP: failed to find events of Pod "etcd-capz-clawep-control-plane-65svl"
STEP: Creating log watcher for controller kube-system/calico-node-zf97x, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-zf97x
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-clawep-control-plane-65svl, container kube-apiserver
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-clawep-control-plane-65svl, container kube-controller-manager
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-clawep-control-plane-65svl
STEP: Creating log watcher for controller kube-system/kube-proxy-bc4nd, container kube-proxy
STEP: failed to find events of Pod "kube-controller-manager-capz-clawep-control-plane-65svl"
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-clawep-control-plane-65svl
STEP: failed to find events of Pod "kube-apiserver-capz-clawep-control-plane-65svl"
STEP: Collecting events for Pod kube-system/calico-node-fjl4p
STEP: Collecting events for Pod kube-system/kube-proxy-jk7pj
STEP: Collecting events for Pod kube-system/kube-proxy-hl9jz
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-clawep-control-plane-65svl, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-jk7pj, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-fjl4p, container calico-node
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-pfn4l, container coredns
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-clawep-control-plane-65svl
STEP: failed to find events of Pod "kube-scheduler-capz-clawep-control-plane-65svl"
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-pfn4l
STEP: Creating log watcher for controller kube-system/etcd-capz-clawep-control-plane-65svl, container etcd
STEP: Creating log watcher for controller kube-system/coredns-78fcd69978-hpgbs, container coredns
STEP: Collecting events for Pod kube-system/coredns-78fcd69978-hpgbs
STEP: Creating log watcher for controller kube-system/kube-proxy-hl9jz, container kube-proxy
STEP: Fetching activity logs took 1.766676122s
... skipping 17 lines ...