Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-08-23 17:47
Elapsed: 52m59s
Revision: main

Test Failures


AzureDisk CSI Driver End-to-End Tests Dynamic Provisioning [single-az] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 7m27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureDisk\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\s\[single\-az\]\sshould\screate\smultiple\sPV\sobjects\,\sbind\sto\sPVCs\sand\sattach\sall\sto\sdifferent\spods\son\sthe\ssame\snode\s\[kubernetes\.io\/azure\-disk\]\s\[disk\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:317
Unexpected error:
    <*errors.errorString | 0xc0003de380>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:734
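The "timed out waiting for the condition" string is the standard timeout error (wait.ErrWaitTimeout) returned by the polling helpers in k8s.io/apimachinery/pkg/util/wait. Below is a minimal, hypothetical sketch of a pod-phase wait of the kind the failing assertion in testsuites.go performs; the function and identifier names are illustrative and not the driver's actual code.

package waitsketch

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccessOrFail polls a pod's phase until it reaches Succeeded, fails,
// or the timeout expires. On timeout, wait.PollImmediate returns wait.ErrWaitTimeout,
// whose message is exactly "timed out waiting for the condition".
func waitForPodSuccessOrFail(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case v1.PodSucceeded:
			return true, nil // condition met
		case v1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // keep polling until the timeout expires
		}
	})
}

A timeout of this kind usually means the pod never left Pending within the allotted window, for example because one of the Azure disks could not be attached to the node in time.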
				


11 Passed Tests

41 Skipped Tests

Error lines from build-log.txt

... skipping 695 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 23 18:02:21.426: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ssj5t" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Aug 23 18:02:21.454: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 28.062812ms
Aug 23 18:02:23.483: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056926546s
Aug 23 18:02:25.515: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089582844s
Aug 23 18:02:27.545: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.1192498s
Aug 23 18:02:29.575: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149562701s
Aug 23 18:02:31.606: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179785481s
Aug 23 18:02:33.641: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214865603s
Aug 23 18:02:35.677: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Pending", Reason="", readiness=false. Elapsed: 14.251376334s
Aug 23 18:02:37.706: INFO: Pod "azuredisk-volume-tester-ssj5t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.280539804s
STEP: Saw pod success
Aug 23 18:02:37.707: INFO: Pod "azuredisk-volume-tester-ssj5t" satisfied condition "Succeeded or Failed"
Aug 23 18:02:37.707: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-ssj5t"
Aug 23 18:02:37.752: INFO: Pod azuredisk-volume-tester-ssj5t has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-ssj5t in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:02:37.846: INFO: deleting PVC "azuredisk-8081"/"pvc-b2ccq"
Aug 23 18:02:37.846: INFO: Deleting PersistentVolumeClaim "pvc-b2ccq"
STEP: waiting for claim's PV "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" to be deleted
Aug 23 18:02:37.876: INFO: Waiting up to 10m0s for PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 to get deleted
Aug 23 18:02:37.908: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Released (32.527495ms)
Aug 23 18:02:42.939: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (5.0637785s)
Aug 23 18:02:47.969: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (10.093614354s)
Aug 23 18:02:53.001: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (15.125009563s)
Aug 23 18:02:58.030: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (20.154250406s)
Aug 23 18:03:03.070: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (25.193992019s)
Aug 23 18:03:08.101: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 found and phase=Failed (30.225230917s)
Aug 23 18:03:13.133: INFO: PersistentVolume pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 was removed
Aug 23 18:03:13.133: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Aug 23 18:03:13.161: INFO: Claim "azuredisk-8081" in namespace "pvc-b2ccq" doesn't exist in the system
Aug 23 18:03:13.161: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-jdn48
Aug 23 18:03:13.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
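The repeated "PersistentVolume ... found and phase=..." lines above come from a delete-and-poll loop: the PVC is deleted, then the test polls until the bound PV object disappears. A hedged sketch of such a loop using client-go follows; the helper name and parameters are illustrative, not the repository's actual implementation.

package waitsketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls until the named PersistentVolume no longer exists or the
// timeout expires, logging the current phase on each iteration (in the run above,
// the phase moved Released -> Failed before the object was finally removed).
func waitForPVDeleted(c kubernetes.Interface, pvName string, poll, timeout time.Duration) error {
	return wait.PollImmediate(poll, timeout, func() (bool, error) {
		pv, err := c.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // PV object is gone; deletion finished
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s found and phase=%s\n", pvName, pv.Status.Phase)
		return false, nil
	})
}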
... skipping 77 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Aug 23 18:03:29.712: INFO: deleting Pod "azuredisk-3274"/"azuredisk-volume-tester-6jp2j"
Aug 23 18:03:29.756: INFO: Error getting logs for pod azuredisk-volume-tester-6jp2j: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-6jp2j)
STEP: Deleting pod azuredisk-volume-tester-6jp2j in namespace azuredisk-3274
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:03:29.843: INFO: deleting PVC "azuredisk-3274"/"pvc-dxmxr"
Aug 23 18:03:29.843: INFO: Deleting PersistentVolumeClaim "pvc-dxmxr"
STEP: waiting for claim's PV "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" to be deleted
... skipping 18 lines ...
Aug 23 18:04:55.436: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m25.562163205s)
Aug 23 18:05:00.473: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m30.59886036s)
Aug 23 18:05:05.504: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m35.629795606s)
Aug 23 18:05:10.534: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m40.659812958s)
Aug 23 18:05:15.563: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m45.689106994s)
Aug 23 18:05:20.593: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Bound (1m50.718518851s)
Aug 23 18:05:25.622: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (1m55.747844599s)
Aug 23 18:05:30.651: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (2m0.77699409s)
Aug 23 18:05:35.681: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (2m5.806853042s)
Aug 23 18:05:40.710: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (2m10.836138196s)
Aug 23 18:05:45.740: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (2m15.865777581s)
Aug 23 18:05:50.770: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e found and phase=Failed (2m20.895861187s)
Aug 23 18:05:55.799: INFO: PersistentVolume pvc-6cf967e6-5b3c-4059-be5c-36066e10047e was removed
Aug 23 18:05:55.799: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-3274 to be removed
Aug 23 18:05:55.827: INFO: Claim "azuredisk-3274" in namespace "pvc-dxmxr" doesn't exist in the system
Aug 23 18:05:55.827: INFO: deleting StorageClass azuredisk-3274-kubernetes.io-azure-disk-dynamic-sc-44vxm
Aug 23 18:05:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-3274" for this suite.
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 23 18:05:56.802: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dn27j" in namespace "azuredisk-495" to be "Succeeded or Failed"
Aug 23 18:05:56.830: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 27.781655ms
Aug 23 18:05:58.864: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062308443s
Aug 23 18:06:00.894: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092235949s
Aug 23 18:06:02.927: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.125597337s
Aug 23 18:06:04.957: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155536238s
Aug 23 18:06:06.987: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.185390104s
Aug 23 18:06:09.016: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Pending", Reason="", readiness=false. Elapsed: 12.214182816s
Aug 23 18:06:11.049: INFO: Pod "azuredisk-volume-tester-dn27j": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.247088914s
STEP: Saw pod success
Aug 23 18:06:11.049: INFO: Pod "azuredisk-volume-tester-dn27j" satisfied condition "Succeeded or Failed"
Aug 23 18:06:11.049: INFO: deleting Pod "azuredisk-495"/"azuredisk-volume-tester-dn27j"
Aug 23 18:06:11.092: INFO: Pod azuredisk-volume-tester-dn27j has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-dn27j in namespace azuredisk-495
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:06:11.205: INFO: deleting PVC "azuredisk-495"/"pvc-69hjl"
Aug 23 18:06:11.205: INFO: Deleting PersistentVolumeClaim "pvc-69hjl"
STEP: waiting for claim's PV "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" to be deleted
Aug 23 18:06:11.245: INFO: Waiting up to 10m0s for PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c to get deleted
Aug 23 18:06:11.277: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Released (31.211794ms)
Aug 23 18:06:16.306: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Failed (5.060347224s)
Aug 23 18:06:21.335: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Failed (10.089684732s)
Aug 23 18:06:26.365: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Failed (15.119519577s)
Aug 23 18:06:31.394: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Failed (20.148732953s)
Aug 23 18:06:36.423: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c found and phase=Failed (25.177960284s)
Aug 23 18:06:41.453: INFO: PersistentVolume pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c was removed
Aug 23 18:06:41.453: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-495 to be removed
Aug 23 18:06:41.481: INFO: Claim "azuredisk-495" in namespace "pvc-69hjl" doesn't exist in the system
Aug 23 18:06:41.481: INFO: deleting StorageClass azuredisk-495-kubernetes.io-azure-disk-dynamic-sc-stgg8
Aug 23 18:06:41.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-495" for this suite.
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Aug 23 18:06:42.411: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-krprp" in namespace "azuredisk-9947" to be "Error status code"
Aug 23 18:06:42.440: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 28.381457ms
Aug 23 18:06:44.469: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057981782s
Aug 23 18:06:46.499: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088039387s
Aug 23 18:06:48.529: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118041106s
Aug 23 18:06:50.559: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.147753261s
Aug 23 18:06:52.589: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.177956316s
Aug 23 18:06:54.620: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 12.208264799s
Aug 23 18:06:56.649: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Pending", Reason="", readiness=false. Elapsed: 14.237631764s
Aug 23 18:06:58.680: INFO: Pod "azuredisk-volume-tester-krprp": Phase="Failed", Reason="", readiness=false. Elapsed: 16.268725272s
STEP: Saw pod failure
Aug 23 18:06:58.680: INFO: Pod "azuredisk-volume-tester-krprp" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Aug 23 18:06:58.723: INFO: deleting Pod "azuredisk-9947"/"azuredisk-volume-tester-krprp"
Aug 23 18:06:58.753: INFO: Pod azuredisk-volume-tester-krprp has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-krprp in namespace azuredisk-9947
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:06:58.849: INFO: deleting PVC "azuredisk-9947"/"pvc-6wszl"
Aug 23 18:06:58.849: INFO: Deleting PersistentVolumeClaim "pvc-6wszl"
STEP: waiting for claim's PV "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" to be deleted
Aug 23 18:06:58.879: INFO: Waiting up to 10m0s for PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a to get deleted
Aug 23 18:06:58.909: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Released (29.103666ms)
Aug 23 18:07:03.940: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Failed (5.060384495s)
Aug 23 18:07:08.973: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Failed (10.093585608s)
Aug 23 18:07:14.005: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Failed (15.125445378s)
Aug 23 18:07:19.038: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Failed (20.158616263s)
Aug 23 18:07:24.068: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a found and phase=Failed (25.188592931s)
Aug 23 18:07:29.097: INFO: PersistentVolume pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a was removed
Aug 23 18:07:29.097: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9947 to be removed
Aug 23 18:07:29.125: INFO: Claim "azuredisk-9947" in namespace "pvc-6wszl" doesn't exist in the system
Aug 23 18:07:29.125: INFO: deleting StorageClass azuredisk-9947-kubernetes.io-azure-disk-dynamic-sc-phv8k
Aug 23 18:07:29.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9947" for this suite.
... skipping 23 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod is running
Aug 23 18:12:30.156: INFO: deleting Pod "azuredisk-5541"/"azuredisk-volume-tester-c9fgz"
Aug 23 18:12:30.207: INFO: Error getting logs for pod azuredisk-volume-tester-c9fgz: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-c9fgz)
STEP: Deleting pod azuredisk-volume-tester-c9fgz in namespace azuredisk-5541
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:12:30.292: INFO: deleting PVC "azuredisk-5541"/"pvc-5ddt6"
Aug 23 18:12:30.292: INFO: Deleting PersistentVolumeClaim "pvc-5ddt6"
STEP: waiting for claim's PV "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" to be deleted
... skipping 17 lines ...
Aug 23 18:13:50.833: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m20.503793738s)
Aug 23 18:13:55.864: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m25.534294349s)
Aug 23 18:14:00.919: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m30.590087786s)
Aug 23 18:14:05.948: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m35.618397833s)
Aug 23 18:14:10.977: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m40.647407991s)
Aug 23 18:14:16.005: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Bound (1m45.675782968s)
Aug 23 18:14:21.034: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (1m50.704277336s)
Aug 23 18:14:26.063: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (1m55.733718975s)
Aug 23 18:14:31.092: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (2m0.762719422s)
Aug 23 18:14:36.121: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (2m5.791556208s)
Aug 23 18:14:41.149: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (2m10.820020825s)
Aug 23 18:14:46.177: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (2m15.847898527s)
Aug 23 18:14:51.205: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 found and phase=Failed (2m20.875941245s)
Aug 23 18:14:56.234: INFO: PersistentVolume pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 was removed
Aug 23 18:14:56.234: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Aug 23 18:14:56.261: INFO: Claim "azuredisk-5541" in namespace "pvc-5ddt6" doesn't exist in the system
Aug 23 18:14:56.261: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-6c2w7
STEP: Collecting events from namespace "azuredisk-5541".
STEP: Found 9 events.
... skipping 81 lines ...
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:43
    should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:317

    Unexpected error:
        <*errors.errorString | 0xc0003de380>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

... skipping 50 lines ...
Aug 23 18:17:42.861: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (5.058586704s)
Aug 23 18:17:47.889: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (10.087192061s)
Aug 23 18:17:52.918: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (15.116099454s)
Aug 23 18:17:57.949: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (20.146665819s)
Aug 23 18:18:02.979: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (25.176414541s)
Aug 23 18:18:08.007: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Bound (30.204929424s)
Aug 23 18:18:13.039: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (35.23716664s)
Aug 23 18:18:18.068: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (40.265703964s)
Aug 23 18:18:23.099: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (45.296899714s)
Aug 23 18:18:28.129: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (50.327183358s)
Aug 23 18:18:33.161: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (55.359254007s)
Aug 23 18:18:38.191: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 found and phase=Failed (1m0.389201884s)
Aug 23 18:18:43.219: INFO: PersistentVolume pvc-ac703f54-7351-4dbe-93d4-c8293e512439 was removed
Aug 23 18:18:43.220: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Aug 23 18:18:43.247: INFO: Claim "azuredisk-5356" in namespace "pvc-htbwj" doesn't exist in the system
Aug 23 18:18:43.247: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-n4vwc
Aug 23 18:18:43.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 156 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 23 18:19:01.765: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-6546b" in namespace "azuredisk-8510" to be "Succeeded or Failed"
Aug 23 18:19:01.793: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 28.306851ms
Aug 23 18:19:03.824: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059452746s
Aug 23 18:19:05.853: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088690072s
Aug 23 18:19:07.882: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117807112s
Aug 23 18:19:09.916: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151021809s
Aug 23 18:19:11.944: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179660005s
... skipping 9 lines ...
Aug 23 18:19:32.236: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 30.471588742s
Aug 23 18:19:34.265: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.500097372s
Aug 23 18:19:36.294: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.529034061s
Aug 23 18:19:38.322: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Pending", Reason="", readiness=false. Elapsed: 36.55788472s
Aug 23 18:19:40.351: INFO: Pod "azuredisk-volume-tester-6546b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.586819296s
STEP: Saw pod success
Aug 23 18:19:40.351: INFO: Pod "azuredisk-volume-tester-6546b" satisfied condition "Succeeded or Failed"
Aug 23 18:19:40.351: INFO: deleting Pod "azuredisk-8510"/"azuredisk-volume-tester-6546b"
Aug 23 18:19:40.396: INFO: Pod azuredisk-volume-tester-6546b has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-6546b in namespace azuredisk-8510
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:19:40.500: INFO: deleting PVC "azuredisk-8510"/"pvc-cfrnh"
Aug 23 18:19:40.500: INFO: Deleting PersistentVolumeClaim "pvc-cfrnh"
STEP: waiting for claim's PV "pvc-afce054e-d667-4123-99f2-c78522ce11ea" to be deleted
Aug 23 18:19:40.535: INFO: Waiting up to 10m0s for PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea to get deleted
Aug 23 18:19:40.562: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Released (27.304336ms)
Aug 23 18:19:45.592: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (5.056963473s)
Aug 23 18:19:50.620: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (10.085784559s)
Aug 23 18:19:55.651: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (15.116658925s)
Aug 23 18:20:00.680: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (20.145594854s)
Aug 23 18:20:05.761: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (25.22661757s)
Aug 23 18:20:10.791: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (30.256485004s)
Aug 23 18:20:16.081: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (35.546671457s)
Aug 23 18:20:21.111: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea found and phase=Failed (40.576088592s)
Aug 23 18:20:26.153: INFO: PersistentVolume pvc-afce054e-d667-4123-99f2-c78522ce11ea was removed
Aug 23 18:20:26.153: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8510 to be removed
Aug 23 18:20:26.180: INFO: Claim "azuredisk-8510" in namespace "pvc-cfrnh" doesn't exist in the system
Aug 23 18:20:26.180: INFO: deleting StorageClass azuredisk-8510-kubernetes.io-azure-disk-dynamic-sc-8d2pm
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:20:26.270: INFO: deleting PVC "azuredisk-8510"/"pvc-xbvw9"
Aug 23 18:20:26.270: INFO: Deleting PersistentVolumeClaim "pvc-xbvw9"
STEP: waiting for claim's PV "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" to be deleted
Aug 23 18:20:26.300: INFO: Waiting up to 10m0s for PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 to get deleted
Aug 23 18:20:26.328: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Bound (28.275974ms)
Aug 23 18:20:31.357: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Failed (5.056650027s)
Aug 23 18:20:36.385: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Failed (10.08503259s)
Aug 23 18:20:41.414: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Failed (15.11437778s)
Aug 23 18:20:46.443: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Failed (20.143224156s)
Aug 23 18:20:51.472: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 found and phase=Failed (25.172340473s)
Aug 23 18:20:56.501: INFO: PersistentVolume pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 was removed
Aug 23 18:20:56.502: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8510 to be removed
Aug 23 18:20:56.530: INFO: Claim "azuredisk-8510" in namespace "pvc-xbvw9" doesn't exist in the system
Aug 23 18:20:56.530: INFO: deleting StorageClass azuredisk-8510-kubernetes.io-azure-disk-dynamic-sc-vpg5x
STEP: validating provisioned PV
STEP: checking the PV
... skipping 38 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 23 18:21:07.790: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qgqzs" in namespace "azuredisk-5561" to be "Succeeded or Failed"
Aug 23 18:21:07.818: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 28.467108ms
Aug 23 18:21:09.847: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057425028s
Aug 23 18:21:11.877: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087305313s
Aug 23 18:21:13.911: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.121042204s
Aug 23 18:21:15.940: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.150356944s
Aug 23 18:21:17.969: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179428827s
... skipping 9 lines ...
Aug 23 18:21:38.259: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 30.469296073s
Aug 23 18:21:40.292: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 32.502162521s
Aug 23 18:21:42.320: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 34.530339506s
Aug 23 18:21:44.349: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Pending", Reason="", readiness=false. Elapsed: 36.559546817s
Aug 23 18:21:46.379: INFO: Pod "azuredisk-volume-tester-qgqzs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.58905606s
STEP: Saw pod success
Aug 23 18:21:46.379: INFO: Pod "azuredisk-volume-tester-qgqzs" satisfied condition "Succeeded or Failed"
Aug 23 18:21:46.379: INFO: deleting Pod "azuredisk-5561"/"azuredisk-volume-tester-qgqzs"
Aug 23 18:21:46.421: INFO: Pod azuredisk-volume-tester-qgqzs has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.089332 seconds, 1.1GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Aug 23 18:21:46.518: INFO: deleting PVC "azuredisk-5561"/"pvc-fkfqn"
Aug 23 18:21:46.518: INFO: Deleting PersistentVolumeClaim "pvc-fkfqn"
STEP: waiting for claim's PV "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" to be deleted
Aug 23 18:21:46.549: INFO: Waiting up to 10m0s for PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 to get deleted
Aug 23 18:21:46.576: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Released (27.245844ms)
Aug 23 18:21:51.604: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (5.055399566s)
Aug 23 18:21:56.634: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (10.085071657s)
Aug 23 18:22:01.663: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (15.11436629s)
Aug 23 18:22:06.693: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (20.144281163s)
Aug 23 18:22:11.722: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (25.173229439s)
Aug 23 18:22:16.751: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (30.20232701s)
Aug 23 18:22:21.780: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (35.231287295s)
Aug 23 18:22:26.810: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (40.260819159s)
Aug 23 18:22:31.839: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (45.289977535s)
Aug 23 18:22:36.868: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 found and phase=Failed (50.318679601s)
Aug 23 18:22:41.897: INFO: PersistentVolume pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 was removed
Aug 23 18:22:41.897: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5561 to be removed
Aug 23 18:22:41.925: INFO: Claim "azuredisk-5561" in namespace "pvc-fkfqn" doesn't exist in the system
Aug 23 18:22:41.925: INFO: deleting StorageClass azuredisk-5561-kubernetes.io-azure-disk-dynamic-sc-2fh4z
STEP: validating provisioned PV
STEP: checking the PV
... skipping 94 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 23 18:22:54.709: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-dxfpd" in namespace "azuredisk-953" to be "Succeeded or Failed"
Aug 23 18:22:54.745: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.661869ms
Aug 23 18:22:56.775: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06600373s
Aug 23 18:22:58.803: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094613686s
Aug 23 18:23:00.833: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.124629287s
Aug 23 18:23:02.862: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.153724152s
Aug 23 18:23:04.892: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.183156258s
... skipping 10 lines ...
Aug 23 18:23:27.220: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 32.511151434s
Aug 23 18:23:29.252: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 34.54301812s
Aug 23 18:23:31.280: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 36.571528388s
Aug 23 18:23:33.309: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Pending", Reason="", readiness=false. Elapsed: 38.599748701s
Aug 23 18:23:35.338: INFO: Pod "azuredisk-volume-tester-dxfpd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.629303341s
STEP: Saw pod success
Aug 23 18:23:35.338: INFO: Pod "azuredisk-volume-tester-dxfpd" satisfied condition "Succeeded or Failed"
Aug 23 18:23:35.339: INFO: deleting Pod "azuredisk-953"/"azuredisk-volume-tester-dxfpd"
Aug 23 18:23:35.381: INFO: Pod azuredisk-volume-tester-dxfpd has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-dxfpd in namespace azuredisk-953
STEP: validating provisioned PV
STEP: checking the PV
Aug 23 18:23:35.478: INFO: deleting PVC "azuredisk-953"/"pvc-f8ssq"
Aug 23 18:23:35.478: INFO: Deleting PersistentVolumeClaim "pvc-f8ssq"
STEP: waiting for claim's PV "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" to be deleted
Aug 23 18:23:35.507: INFO: Waiting up to 10m0s for PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 to get deleted
Aug 23 18:23:35.535: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Released (27.773027ms)
Aug 23 18:23:40.565: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (5.05743393s)
Aug 23 18:23:45.594: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (10.086454041s)
Aug 23 18:23:50.623: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (15.115382389s)
Aug 23 18:23:55.652: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (20.144325211s)
Aug 23 18:24:00.681: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (25.173811833s)
Aug 23 18:24:05.710: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (30.202688767s)
Aug 23 18:24:10.739: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (35.23193065s)
Aug 23 18:24:15.769: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (40.261515981s)
Aug 23 18:24:20.799: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (45.291194393s)
Aug 23 18:24:25.828: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (50.320350584s)
Aug 23 18:24:30.858: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (55.350093646s)
Aug 23 18:24:35.886: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 found and phase=Failed (1m0.378919864s)
Aug 23 18:24:40.916: INFO: PersistentVolume pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75 was removed
Aug 23 18:24:40.916: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-953 to be removed
Aug 23 18:24:40.943: INFO: Claim "azuredisk-953" in namespace "pvc-f8ssq" doesn't exist in the system
Aug 23 18:24:40.944: INFO: deleting StorageClass azuredisk-953-kubernetes.io-azure-disk-dynamic-sc-ncw9n
STEP: validating provisioned PV
STEP: checking the PV
... skipping 309 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:263
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client
Aug 23 18:27:57.584: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
I0823 18:27:57.724299   33187 azuredisk_driver.go:57] Using azure disk driver: kubernetes.io/azure-disk
... skipping 2 lines ...

S [SKIPPING] [0.198 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:67
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:263
------------------------------
... skipping 244 lines ...
I0823 17:56:39.466733       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2021-08-23 17:49:11 +0000 UTC to 2031-08-21 17:54:11 +0000 UTC (now=2021-08-23 17:56:39.466689573 +0000 UTC))"
I0823 17:56:39.467060       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1629741398\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1629741398\" (2021-08-23 16:56:37 +0000 UTC to 2022-08-23 16:56:37 +0000 UTC (now=2021-08-23 17:56:39.467026476 +0000 UTC))"
I0823 17:56:39.467463       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1629741399\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1629741399\" (2021-08-23 16:56:38 +0000 UTC to 2022-08-23 16:56:38 +0000 UTC (now=2021-08-23 17:56:39.467431479 +0000 UTC))"
I0823 17:56:39.467546       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0823 17:56:39.467636       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0823 17:56:39.474249       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0823 17:56:43.065145       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0823 17:56:43.065175       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0823 17:56:45.388291       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0823 17:56:45.389178       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-tj2yec-control-plane-r8mns_fa1b4020-8bda-4164-b8dc-c5798d294c46 became leader"
W0823 17:56:45.428360       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0823 17:56:45.429475       1 azure_auth.go:232] Using AzurePublicCloud environment
I0823 17:56:45.429547       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0823 17:56:45.429917       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0823 17:56:45.432127       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0823 17:56:45.432528       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0823 17:56:45.433535       1 reflector.go:219] Starting reflector *v1.Secret (17h46m12.059521078s) from k8s.io/client-go/informers/factory.go:134
I0823 17:56:45.433680       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0823 17:56:45.434184       1 reflector.go:219] Starting reflector *v1.ServiceAccount (17h46m12.059521078s) from k8s.io/client-go/informers/factory.go:134
I0823 17:56:45.434207       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0823 17:56:45.458152       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0823 17:56:45.458533       1 controllermanager.go:562] Starting "tokencleaner"
I0823 17:56:45.465684       1 controllermanager.go:577] Started "tokencleaner"
I0823 17:56:45.465708       1 controllermanager.go:562] Starting "attachdetach"
I0823 17:56:45.465886       1 tokencleaner.go:118] Starting token cleaner controller
I0823 17:56:45.465905       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
I0823 17:56:45.465921       1 shared_informer.go:270] caches populated
... skipping 5 lines ...
I0823 17:56:45.481782       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0823 17:56:45.481843       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0823 17:56:45.481920       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0823 17:56:45.481988       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0823 17:56:45.482007       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0823 17:56:45.482030       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0823 17:56:45.482081       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0823 17:56:45.482095       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0823 17:56:45.482260       1 controllermanager.go:577] Started "attachdetach"
I0823 17:56:45.482282       1 controllermanager.go:562] Starting "root-ca-cert-publisher"
I0823 17:56:45.482581       1 attach_detach_controller.go:328] Starting attach detach controller
I0823 17:56:45.482602       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0823 17:56:45.489428       1 controllermanager.go:577] Started "root-ca-cert-publisher"
... skipping 122 lines ...
I0823 17:56:47.044018       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0823 17:56:47.044033       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0823 17:56:47.044045       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0823 17:56:47.044066       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0823 17:56:47.044118       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0823 17:56:47.044134       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0823 17:56:47.044170       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0823 17:56:47.044184       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0823 17:56:47.044248       1 controllermanager.go:577] Started "persistentvolume-binder"
I0823 17:56:47.044261       1 controllermanager.go:562] Starting "garbagecollector"
I0823 17:56:47.044321       1 pv_controller_base.go:308] Starting persistent volume controller
I0823 17:56:47.044333       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0823 17:56:47.299410       1 garbagecollector.go:142] Starting garbage collector controller
... skipping 397 lines ...
I0823 17:57:19.842343       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 17:57:19.844510       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 17:57:19.854114       1 pv_controller_base.go:528] resyncing PV controller
I0823 17:57:20.126588       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 17:57:26.349852       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="113.901µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53314" resp=200
I0823 17:57:28.608549       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-control-plane-r8mns"
W0823 17:57:28.610937       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tj2yec-control-plane-r8mns" does not exist
I0823 17:57:28.608912       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-tj2yec-control-plane-r8mns}
I0823 17:57:28.610906       1 controller.go:682] Ignoring node capz-tj2yec-control-plane-r8mns with Ready condition status False
I0823 17:57:28.611591       1 controller.go:269] Triggering nodeSync
I0823 17:57:28.611685       1 controller.go:288] nodeSync has been triggered
I0823 17:57:28.611799       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0823 17:57:28.611899       1 controller.go:779] Finished updateLoadBalancerHosts
... skipping 33 lines ...
I0823 17:57:30.950564       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0823 17:57:30.998344       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (114.702262ms)
I0823 17:57:30.999374       1 endpointslicemirroring_controller.go:274] syncEndpoints("kube-system/kube-dns")
I0823 17:57:31.000163       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (793.607µs)
I0823 17:57:31.002955       1 daemon_controller.go:226] Adding daemon set kube-proxy
I0823 17:57:31.015600       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="149.620386ms"
I0823 17:57:31.015875       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0823 17:57:31.016065       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-08-23 17:57:31.016037767 +0000 UTC m=+53.425070624"
I0823 17:57:31.016981       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2021-08-23 17:57:30 +0000 UTC - now: 2021-08-23 17:57:31.016973975 +0000 UTC m=+53.426006832]
I0823 17:57:31.017139       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (133.024833ms)
I0823 17:57:31.027462       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="11.402706ms"
I0823 17:57:31.027774       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0823 17:57:31.027516       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-08-23 17:57:31.027496273 +0000 UTC m=+53.436529030"
... skipping 293 lines ...
I0823 17:58:04.262402       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc041164af25ddc1a, ext:86254044751, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:04.262328       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/calico-kube-controllers-846b5f484d-qjd9w"
I0823 17:58:04.271031       1 controller_utils.go:581] Controller calico-kube-controllers-846b5f484d created pod calico-kube-controllers-846b5f484d-qjd9w
I0823 17:58:04.271277       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-846b5f484d, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0823 17:58:04.271923       1 event.go:291] "Event occurred" object="kube-system/calico-kube-controllers-846b5f484d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-846b5f484d-qjd9w"
I0823 17:58:04.274524       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="527.002924ms"
I0823 17:58:04.274576       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0823 17:58:04.274627       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-23 17:58:04.274596301 +0000 UTC m=+86.683629058"
I0823 17:58:04.275340       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-23 17:58:03 +0000 UTC - now: 2021-08-23 17:58:04.275334207 +0000 UTC m=+86.684367064]
I0823 17:58:04.347295       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-846b5f484d"
I0823 17:58:04.347795       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-846b5f484d" (502.937923ms)
I0823 17:58:04.347977       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc041164af25ddc1a, ext:86254044751, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:04.348328       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-846b5f484d, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 2 lines ...
I0823 17:58:04.380627       1 disruption.go:433] updatePod "calico-kube-controllers-846b5f484d-qjd9w" -> PDB "calico-kube-controllers"
I0823 17:58:04.380821       1 replica_set.go:443] Pod calico-kube-controllers-846b5f484d-qjd9w updated, objectMeta {Name:calico-kube-controllers-846b5f484d-qjd9w GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:f2a96d4e-154f-43aa-a017-eaa226322ffa ResourceVersion:600 Generation:0 CreationTimestamp:2021-08-23 17:58:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:045e8a68-be18-4396-898f-337f106ed2c3 Controller:0xc000ce43ae BlockOwnerDeletion:0xc000ce43af}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-23 17:58:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"045e8a68-be18-4396-898f-337f106ed2c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-846b5f484d-qjd9w GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:f2a96d4e-154f-43aa-a017-eaa226322ffa ResourceVersion:606 Generation:0 CreationTimestamp:2021-08-23 17:58:04 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:045e8a68-be18-4396-898f-337f106ed2c3 Controller:0xc000cb814e BlockOwnerDeletion:0xc000cb814f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-23 17:58:04 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"045e8a68-be18-4396-898f-337f106ed2c3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-08-23 17:58:04 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0823 17:58:04.381560       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
I0823 17:58:04.381736       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:04.382113       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
I0823 17:58:04.381922       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
E0823 17:58:04.382477       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.382581       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.382729       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.383154       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.384549       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.384698       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.387119       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.387133       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.387155       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.387686       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.387829       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.387961       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.388387       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.390161       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.390307       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.390797       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.390946       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.391063       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.391449       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.392032       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.392770       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.393191       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.394025       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.394194       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.396839       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.396944       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.397066       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.397436       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.398736       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.398966       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.399355       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.400184       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.400336       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:04.404728       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:04.404740       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:04.404763       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:04.439364       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="164.743484ms"
I0823 17:58:04.439788       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-23 17:58:04.439769089 +0000 UTC m=+86.848801846"
I0823 17:58:04.439721       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0823 17:58:04.440855       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-23 17:58:03 +0000 UTC - now: 2021-08-23 17:58:04.440846598 +0000 UTC m=+86.849879355]
I0823 17:58:04.440980       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 598s
I0823 17:58:04.441135       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="1.353011ms"
... skipping 222 lines ...
I0823 17:58:34.888228       1 node_lifecycle_controller.go:869] Node capz-tj2yec-control-plane-r8mns is NotReady as of 2021-08-23 17:58:34.888207741 +0000 UTC m=+117.297240498. Adding it to the Taint queue.
I0823 17:58:36.627449       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-control-plane-r8mns), assuming it is managed by availability set: not a vmss instance
I0823 17:58:37.378440       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
I0823 17:58:37.378730       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:37.378844       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
I0823 17:58:37.378963       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
E0823 17:58:37.379879       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.380217       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.380383       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:37.381329       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041164b2598a942, ext:87039794551, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:37.381766       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165356c10856, ext:119790782191, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:37.382584       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:37.382813       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:37.382945       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165356c10856, ext:119790782191, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:37.383318       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165356d8eb9d, ext:119792347602, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:37.383473       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:37.383718       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:37.383990       1 daemon_controller.go:1102] Updating daemon set status
E0823 17:58:37.383914       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.384294       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.384405       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:37.384872       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (5.784144ms)
E0823 17:58:37.385187       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.385294       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.385466       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.385836       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.385976       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.386105       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.386489       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.386595       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.386741       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.387115       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.387256       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.387407       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.387777       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.387882       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.388028       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.388414       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.388517       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.388675       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.389059       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.389173       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.389335       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.389707       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.389802       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.389922       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.390299       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.390388       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.390501       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:37.390846       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:37.390939       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:37.391064       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:38.418528       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
I0823 17:58:38.418699       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:38.418730       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
I0823 17:58:38.418820       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
E0823 17:58:38.420407       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.423224       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.423302       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:38.423520       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165356d8eb9d, ext:119792347602, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:38.423668       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411653994099e5, ext:120832696958, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:38.423719       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:38.423812       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:38.423841       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411653994099e5, ext:120832696958, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:38.423918       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165399446a13, ext:120832946860, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:38.423973       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:38.424018       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:38.424077       1 daemon_controller.go:1102] Updating daemon set status
I0823 17:58:38.424156       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (5.256943ms)
E0823 17:58:38.424414       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.424455       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.424489       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.424778       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.424812       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.424847       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.425117       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.425160       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.425191       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.427426       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.427435       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.427453       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.427664       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.427670       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.427686       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.427948       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.427985       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.428016       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.428253       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.428289       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.428320       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.428579       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.428624       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.428654       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.428896       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.428969       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.429025       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.429318       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.429354       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.429393       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:38.429703       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:38.429746       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:38.429787       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:39.419546       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="80µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53648" resp=200
I0823 17:58:39.888649       1 node_lifecycle_controller.go:869] Node capz-tj2yec-control-plane-r8mns is NotReady as of 2021-08-23 17:58:39.88862967 +0000 UTC m=+122.297662427. Adding it to the Taint queue.
I0823 17:58:40.234067       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-control-plane-r8mns"
I0823 17:58:41.708111       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-control-plane-r8mns), assuming it is managed by availability set: not a vmss instance
I0823 17:58:41.708213       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-control-plane-r8mns), assuming it is managed by availability set: not a vmss instance
I0823 17:58:44.889523       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-control-plane-r8mns ReadyCondition updated. Updating timestamp.
... skipping 7 lines ...
I0823 17:58:49.890146       1 gc_controller.go:161] GC'ing orphaned
I0823 17:58:49.890173       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 17:58:49.890211       1 node_lifecycle_controller.go:869] Node capz-tj2yec-control-plane-r8mns is NotReady as of 2021-08-23 17:58:49.890198151 +0000 UTC m=+132.299231008. Adding it to the Taint queue.
I0823 17:58:49.937525       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
I0823 17:58:49.938313       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:49.938330       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
E0823 17:58:49.938134       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.938345       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
I0823 17:58:49.938234       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
E0823 17:58:49.939249       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.939593       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.939605       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.939632       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:49.939941       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc041165399446a13, ext:120832946860, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:49.940034       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04116567807b830, ext:132349062857, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:49.940053       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:49.940163       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:49.940175       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04116567807b830, ext:132349062857, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:49.940230       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411656780abb49, ext:132349260258, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:49.940247       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:49.940291       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:49.940312       1 daemon_controller.go:1102] Updating daemon set status
I0823 17:58:49.940352       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (1.837613ms)
E0823 17:58:49.940706       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.940727       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.940756       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.943281       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.943298       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.943324       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.943664       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.943752       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.943787       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.947282       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.947298       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.947327       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.947742       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.947812       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.947895       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.951377       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.951458       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.951552       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.951960       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.952068       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.952161       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.955399       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.955422       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.955449       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.955819       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.955838       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.955929       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:49.959294       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:49.959314       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:49.959340       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:50.637192       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 17:58:50.805722       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
E0823 17:58:50.806122       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.806141       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.806174       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:50.806316       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:50.806379       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
I0823 17:58:50.805957       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
E0823 17:58:50.807435       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.807456       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.807619       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:50.807707       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411656780abb49, ext:132349260258, loc:(*time.Location)(0x7505dc0)}}
E0823 17:58:50.808121       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.808537       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.808573       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:50.808291       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411656b02d6d60, ext:133217316245, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:50.808863       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:50.808986       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:50.809003       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411656b02d6d60, ext:133217316245, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:50.809077       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0411656b0397dae, ext:133218106851, loc:(*time.Location)(0x7505dc0)}}
I0823 17:58:50.809098       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0823 17:58:50.809143       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0823 17:58:50.809171       1 daemon_controller.go:1102] Updating daemon set status
I0823 17:58:50.809213       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.74932ms)
E0823 17:58:50.809428       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.809442       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.809489       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.809853       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.809867       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.809968       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.810308       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.810323       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.810350       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.810678       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.810754       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.811057       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.811390       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.811405       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.811428       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.811771       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.811784       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.811806       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.812113       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.812127       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.812221       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.812481       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.812495       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.812521       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0823 17:58:50.812861       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0823 17:58:50.812874       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0823 17:58:50.812898       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0823 17:58:51.710118       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-control-plane-r8mns), assuming it is managed by availability set: not a vmss instance
I0823 17:58:51.710213       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-control-plane-r8mns), assuming it is managed by availability set: not a vmss instance
I0823 17:58:52.824836       1 disruption.go:427] updatePod called on pod "calico-node-4l4mc"
I0823 17:58:52.825297       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-4l4mc, PodDisruptionBudget controller will avoid syncing.
I0823 17:58:52.825469       1 disruption.go:430] No matching pdb for pod "calico-node-4l4mc"
I0823 17:58:52.825713       1 daemon_controller.go:570] Pod calico-node-4l4mc updated.
... skipping 188 lines ...
I0823 17:59:03.239953       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0823 17:59:03.240006       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-08-23 17:59:03.239981035 +0000 UTC m=+145.649013892"
I0823 17:59:03.241163       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.162805ms"
I0823 17:59:03.918596       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (442.902µs)
I0823 17:59:04.855062       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 17:59:04.866277       1 pv_controller_base.go:528] resyncing PV controller
I0823 17:59:04.892503       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-tj2yec-control-plane-r8mns transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-23 17:58:40 +0000 UTC,LastTransitionTime:2021-08-23 17:57:02 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-23 17:59:00 +0000 UTC,LastTransitionTime:2021-08-23 17:59:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0823 17:59:04.892682       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-control-plane-r8mns ReadyCondition updated. Updating timestamp.
I0823 17:59:04.892879       1 node_lifecycle_controller.go:893] Node capz-tj2yec-control-plane-r8mns is healthy again, removing all taints
I0823 17:59:04.893066       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0823 17:59:04.906270       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-846b5f484d-qjd9w"
I0823 17:59:04.906543       1 disruption.go:433] updatePod "calico-kube-controllers-846b5f484d-qjd9w" -> PDB "calico-kube-controllers"
I0823 17:59:04.906771       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (80.7µs)
... skipping 62 lines ...
I0823 17:59:44.230668       1 certificate_controller.go:87] Updating certificate request csr-kf4c5
I0823 17:59:44.230802       1 certificate_controller.go:173] Finished syncing certificate request "csr-kf4c5" (1µs)
I0823 17:59:44.230695       1 certificate_controller.go:87] Updating certificate request csr-kf4c5
I0823 17:59:44.230817       1 certificate_controller.go:173] Finished syncing certificate request "csr-kf4c5" (600ns)
I0823 17:59:44.230827       1 certificate_controller.go:173] Finished syncing certificate request "csr-kf4c5" (600ns)
I0823 17:59:49.239818       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-792q5"
W0823 17:59:49.239850       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tj2yec-md-0-792q5" does not exist
I0823 17:59:49.240828       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-tj2yec-md-0-792q5}
I0823 17:59:49.240861       1 taint_manager.go:440] "Updating known taints on node" node="capz-tj2yec-md-0-792q5" taints=[]
I0823 17:59:49.240948       1 controller.go:682] Ignoring node capz-tj2yec-md-0-792q5 with Ready condition status False
I0823 17:59:49.240963       1 controller.go:269] Triggering nodeSync
I0823 17:59:49.240971       1 controller.go:288] nodeSync has been triggered
I0823 17:59:49.241002       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 268 lines ...
I0823 18:00:09.892750       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:00:09.900171       1 controller.go:269] Triggering nodeSync
I0823 18:00:09.900199       1 controller.go:288] nodeSync has been triggered
I0823 18:00:09.900207       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0823 18:00:09.900218       1 controller.go:779] Finished updateLoadBalancerHosts
I0823 18:00:09.900225       1 controller.go:720] It took 1.84e-05 seconds to finish nodeSyncInternal
I0823 18:00:09.901456       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-tj2yec-md-0-792q5 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-23 17:59:59 +0000 UTC,LastTransitionTime:2021-08-23 17:59:49 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-23 18:00:09 +0000 UTC,LastTransitionTime:2021-08-23 18:00:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0823 18:00:09.901541       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-792q5 ReadyCondition updated. Updating timestamp.
I0823 18:00:09.911106       1 node_lifecycle_controller.go:893] Node capz-tj2yec-md-0-792q5 is healthy again, removing all taints
I0823 18:00:09.911381       1 node_lifecycle_controller.go:1214] Controller detected that zone eastus::0 is now in state Normal.
I0823 18:00:09.914368       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-792q5"
I0823 18:00:09.914541       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-tj2yec-md-0-792q5}
I0823 18:00:09.914635       1 taint_manager.go:440] "Updating known taints on node" node="capz-tj2yec-md-0-792q5" taints=[]
I0823 18:00:09.914719       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-tj2yec-md-0-792q5"
I0823 18:00:10.234336       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-q9lvi8" (16.1µs)
I0823 18:00:12.905731       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-hbpcn"
W0823 18:00:12.905763       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-tj2yec-md-0-hbpcn" does not exist
I0823 18:00:12.906619       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0411667193a07bb, ext:198832266224, loc:(*time.Location)(0x7505dc0)}}
I0823 18:00:12.906929       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc041166b360e8b1f, ext:215315955640, loc:(*time.Location)(0x7505dc0)}}
I0823 18:00:12.907690       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-tj2yec-md-0-hbpcn], creating 1
I0823 18:00:12.907371       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-tj2yec-md-0-hbpcn}
I0823 18:00:12.908581       1 taint_manager.go:440] "Updating known taints on node" node="capz-tj2yec-md-0-hbpcn" taints=[]
I0823 18:00:12.907402       1 controller.go:682] Ignoring node capz-tj2yec-md-0-hbpcn with Ready condition status False
... skipping 300 lines ...
I0823 18:00:53.063888       1 controller.go:779] Finished updateLoadBalancerHosts
I0823 18:00:53.063894       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0823 18:00:53.063901       1 controller.go:720] It took 3.03e-05 seconds to finish nodeSyncInternal
I0823 18:00:53.084699       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-hbpcn"
I0823 18:00:53.085585       1 controller_utils.go:221] Made sure that Node capz-tj2yec-md-0-hbpcn has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0823 18:00:53.295875       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-1nmfwi" (16.5µs)
I0823 18:00:54.920767       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-tj2yec-md-0-hbpcn transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-23 18:00:43 +0000 UTC,LastTransitionTime:2021-08-23 18:00:12 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-23 18:00:53 +0000 UTC,LastTransitionTime:2021-08-23 18:00:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0823 18:00:54.920833       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:00:54.933315       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-hbpcn"
I0823 18:00:54.934329       1 node_lifecycle_controller.go:893] Node capz-tj2yec-md-0-hbpcn is healthy again, removing all taints
I0823 18:00:54.936050       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-tj2yec-md-0-hbpcn}
I0823 18:00:54.936847       1 taint_manager.go:440] "Updating known taints on node" node="capz-tj2yec-md-0-hbpcn" taints=[]
I0823 18:00:54.937004       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-tj2yec-md-0-hbpcn"
... skipping 304 lines ...
I0823 18:02:37.891631       1 pv_controller.go:1108] reclaimVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: policy is Delete
I0823 18:02:37.891815       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]]
I0823 18:02:37.891988       1 pv_controller.go:1763] operation "delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]" is already running, skipping
I0823 18:02:37.891144       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31] started
I0823 18:02:37.894994       1 pv_controller.go:1340] isVolumeReleased[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is released
I0823 18:02:37.895014       1 pv_controller.go:1404] doDeleteVolume [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]
I0823 18:02:37.927337       1 pv_controller.go:1259] deletion of volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:02:37.927370       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: set phase Failed
I0823 18:02:37.927382       1 pv_controller.go:858] updating PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: set phase Failed
I0823 18:02:37.932181       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" with version 1277
I0823 18:02:37.932838       1 pv_controller.go:879] volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" entered phase "Failed"
I0823 18:02:37.933055       1 pv_controller.go:901] volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:02:37.932736       1 pv_protection_controller.go:205] Got event on PV pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31
I0823 18:02:37.932787       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" with version 1277
I0823 18:02:37.933194       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: phase: Failed, bound to: "azuredisk-8081/pvc-b2ccq (uid: c0020c43-3bee-4a6f-8397-4186a7bc7e31)", boundByController: true
I0823 18:02:37.933245       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is bound to claim azuredisk-8081/pvc-b2ccq
I0823 18:02:37.933308       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: claim azuredisk-8081/pvc-b2ccq not found
I0823 18:02:37.933319       1 pv_controller.go:1108] reclaimVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: policy is Delete
I0823 18:02:37.933337       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]]
I0823 18:02:37.933373       1 pv_controller.go:1763] operation "delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]" is already running, skipping
E0823 18:02:37.933504       1 goroutinemap.go:150] Operation for "delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]" failed. No retries permitted until 2021-08-23 18:02:38.43347845 +0000 UTC m=+360.842511307 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:02:37.933691       1 event.go:291] "Event occurred" object="pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted"
I0823 18:02:39.419327       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="124.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55956" resp=200
I0823 18:02:42.834513       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 0 items received
I0823 18:02:45.440849       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 90 items received
I0823 18:02:46.948220       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 10 items received
I0823 18:02:47.248832       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-hbpcn"
... skipping 10 lines ...
I0823 18:02:49.048033       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 0 items received
I0823 18:02:49.419935       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="70.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56058" resp=200
I0823 18:02:49.862562       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:02:49.864714       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:02:49.875891       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:02:49.875971       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" with version 1277
I0823 18:02:49.876018       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: phase: Failed, bound to: "azuredisk-8081/pvc-b2ccq (uid: c0020c43-3bee-4a6f-8397-4186a7bc7e31)", boundByController: true
I0823 18:02:49.876064       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is bound to claim azuredisk-8081/pvc-b2ccq
I0823 18:02:49.876084       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: claim azuredisk-8081/pvc-b2ccq not found
I0823 18:02:49.876093       1 pv_controller.go:1108] reclaimVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: policy is Delete
I0823 18:02:49.876111       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]]
I0823 18:02:49.876146       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31] started
I0823 18:02:49.880718       1 pv_controller.go:1340] isVolumeReleased[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is released
I0823 18:02:49.880745       1 pv_controller.go:1404] doDeleteVolume [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]
I0823 18:02:49.880784       1 pv_controller.go:1259] deletion of volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) since it's in attaching or detaching state
I0823 18:02:49.880800       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: set phase Failed
I0823 18:02:49.880810       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: phase Failed already set
E0823 18:02:49.880839       1 goroutinemap.go:150] Operation for "delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]" failed. No retries permitted until 2021-08-23 18:02:50.880819729 +0000 UTC m=+373.289852486 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) since it's in attaching or detaching state
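The two failed delete attempts above are gated by a per-operation backoff: the first failure blocks retries for 500ms (durationBeforeRetry 500ms), the second for 1s. Below is a minimal standalone sketch of that kind of exponential backoff gate; it is not the actual k8s.io goroutinemap code, and every name and constant in it is an assumption chosen for illustration.

// backoff_sketch.go
//
// Illustrative only: a simplified per-operation backoff gate in the spirit of
// the "No retries permitted until ... (durationBeforeRetry 500ms / 1s)" lines
// above. NOT the real k8s.io goroutinemap implementation; names and constants
// are assumptions for the sketch.
package main

import (
	"fmt"
	"time"
)

// retryGate remembers, per operation key, when the next attempt is allowed
// and how long the following wait should be.
type retryGate struct {
	notBefore map[string]time.Time
	delay     map[string]time.Duration
}

func newRetryGate() *retryGate {
	return &retryGate{
		notBefore: map[string]time.Time{},
		delay:     map[string]time.Duration{},
	}
}

// try runs op unless key is still inside its backoff window. On failure it
// doubles the wait (500ms, 1s, 2s, ...); on success it clears the key.
func (g *retryGate) try(key string, op func() error) error {
	if t, ok := g.notBefore[key]; ok && time.Now().Before(t) {
		return fmt.Errorf("operation %q postponed due to exponential backoff", key)
	}
	if err := op(); err != nil {
		d, ok := g.delay[key]
		if !ok {
			d = 500 * time.Millisecond
		} else {
			d *= 2
		}
		g.delay[key] = d
		g.notBefore[key] = time.Now().Add(d)
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s): %w",
			g.notBefore[key].Format(time.RFC3339), d, err)
	}
	delete(g.delay, key)
	delete(g.notBefore, key)
	return nil
}

func main() {
	g := newRetryGate()
	attempts := 0
	deleteDisk := func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("disk already attached to node, could not be deleted")
		}
		return nil
	}
	key := "delete-pvc-example"
	for i := 0; i < 6; i++ {
		if err := g.try(key, deleteDisk); err != nil {
			fmt.Println(err)
			time.Sleep(600 * time.Millisecond)
			continue
		}
		fmt.Println("deleted")
		break
	}
}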
I0823 18:02:49.905017       1 gc_controller.go:161] GC'ing orphaned
I0823 18:02:49.905063       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:02:49.957728       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:02:50.775946       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:02:51.844063       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Endpoints total 11 items received
I0823 18:02:59.418076       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="79.801µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56156" resp=200
I0823 18:03:02.759462       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) returned with <nil>
I0823 18:03:02.759508       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31) succeeded
I0823 18:03:02.759520       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31 was detached from node:capz-tj2yec-md-0-hbpcn
I0823 18:03:02.759546       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31") on node "capz-tj2yec-md-0-hbpcn" 
I0823 18:03:04.865118       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:03:04.876278       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:03:04.876366       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" with version 1277
I0823 18:03:04.876446       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: phase: Failed, bound to: "azuredisk-8081/pvc-b2ccq (uid: c0020c43-3bee-4a6f-8397-4186a7bc7e31)", boundByController: true
I0823 18:03:04.876521       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is bound to claim azuredisk-8081/pvc-b2ccq
I0823 18:03:04.876550       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: claim azuredisk-8081/pvc-b2ccq not found
I0823 18:03:04.876565       1 pv_controller.go:1108] reclaimVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: policy is Delete
I0823 18:03:04.876615       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]]
I0823 18:03:04.876653       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31] started
I0823 18:03:04.886181       1 pv_controller.go:1340] isVolumeReleased[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is released
... skipping 4 lines ...
I0823 18:03:10.097309       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31
I0823 18:03:10.097550       1 pv_controller.go:1435] volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" deleted
I0823 18:03:10.097579       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: success
I0823 18:03:10.106672       1 pv_protection_controller.go:205] Got event on PV pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31
I0823 18:03:10.106738       1 pv_protection_controller.go:125] Processing PV pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31
I0823 18:03:10.107111       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" with version 1325
I0823 18:03:10.107165       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: phase: Failed, bound to: "azuredisk-8081/pvc-b2ccq (uid: c0020c43-3bee-4a6f-8397-4186a7bc7e31)", boundByController: true
I0823 18:03:10.107819       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: volume is bound to claim azuredisk-8081/pvc-b2ccq
I0823 18:03:10.107880       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: claim azuredisk-8081/pvc-b2ccq not found
I0823 18:03:10.107899       1 pv_controller.go:1108] reclaimVolume[pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31]: policy is Delete
I0823 18:03:10.107916       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31[fa85b39a-0a9d-4cf1-ab0e-3f71e5e13106]]
I0823 18:03:10.108008       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31] started
I0823 18:03:10.111837       1 pv_controller.go:1243] Volume "pvc-c0020c43-3bee-4a6f-8397-4186a7bc7e31" is already being deleted
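Read together, the lines above show the ordering the controllers enforce for this volume: doDeleteVolume fails while the managed disk is still attached, fails again while the detach is in flight, and only succeeds after DetachVolume.Detach completes. The toy Go sketch below simulates that ordering under simplified, assumed types; it is not the real Azure cloud-provider or PV controller code.

// delete_order_sketch.go
//
// Illustrative only: a toy model of why the PV controller's delete attempts
// fail until the attach/detach controller has detached the Azure disk.
// Types, states, and messages are assumptions for the sketch.
package main

import (
	"errors"
	"fmt"
)

type diskState int

const (
	attached diskState = iota
	detaching
	detached
)

type managedDisk struct {
	name  string
	state diskState
}

// deleteDisk mirrors the two failure messages seen in the log: a disk that is
// attached, or mid-detach, cannot be deleted yet.
func deleteDisk(d *managedDisk) error {
	switch d.state {
	case attached:
		return fmt.Errorf("disk(%s) already attached to node, could not be deleted", d.name)
	case detaching:
		return errors.New("failed to delete disk since it's in attaching or detaching state")
	default:
		fmt.Printf("deleted a managed disk: %s\n", d.name)
		return nil
	}
}

func main() {
	d := &managedDisk{name: "capz-example-dynamic-pvc", state: attached}

	steps := []func(){
		func() {},                      // 1st attempt: disk still attached
		func() { d.state = detaching }, // attach/detach controller begins the detach
		func() { d.state = detached },  // "DetachVolume.Detach succeeded"
	}
	for _, step := range steps {
		step()
		if err := deleteDisk(d); err != nil {
			fmt.Println("retry later:", err)
			continue
		}
		break
	}
}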
... skipping 128 lines ...
I0823 18:03:18.685042       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e") from node "capz-tj2yec-md-0-792q5" 
I0823 18:03:18.735387       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" to node "capz-tj2yec-md-0-792q5".
I0823 18:03:18.757575       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" lun 0 to node "capz-tj2yec-md-0-792q5".
I0823 18:03:18.757620       1 azure_controller_standard.go:93] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - attach disk(capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) with DiskEncryptionSetID()
I0823 18:03:19.093971       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1318
I0823 18:03:19.137440       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1318, name default-token-cbddz, uid 769a579a-e83d-49a8-9420-0ed9c00d8979, event type delete
E0823 18:03:19.154940       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1318/default: secrets "default-token-rckwm" is forbidden: unable to create new content in namespace azuredisk-1318 because it is being terminated
I0823 18:03:19.200980       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1318, name kube-root-ca.crt, uid d8057d97-e8cd-4428-9388-a62fedbf7d04, event type delete
I0823 18:03:19.204626       1 publisher.go:186] Finished syncing namespace "azuredisk-1318" (3.890529ms)
I0823 18:03:19.236113       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1318, name default, uid 7218f5c8-e45e-4bd6-a7cb-747899d8f22f, event type delete
I0823 18:03:19.236208       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1318/default), service account deleted, removing tokens
I0823 18:03:19.237124       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1318" (3.9µs)
I0823 18:03:19.255634       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1318, estimate: 0, errors: <nil>
... skipping 25 lines ...
I0823 18:03:19.877456       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: claim azuredisk-3274/pvc-dxmxr found: phase: Bound, bound to: "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e", bindCompleted: true, boundByController: true
I0823 18:03:19.877473       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: all is bound
I0823 18:03:19.877503       1 pv_controller.go:858] updating PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: set phase Bound
I0823 18:03:19.877515       1 pv_controller.go:861] updating PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase Bound already set
I0823 18:03:19.894424       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-694
I0823 18:03:19.973673       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-694, name default-token-2j99g, uid d99ffbbf-7316-4ad8-b23d-773e3d02afc5, event type delete
E0823 18:03:20.006286       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-694/default: secrets "default-token-wq565" is forbidden: unable to create new content in namespace azuredisk-694 because it is being terminated
I0823 18:03:20.025877       1 tokens_controller.go:252] syncServiceAccount(azuredisk-694/default), service account deleted, removing tokens
I0823 18:03:20.025941       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-694, name default, uid af02bf31-daab-4cba-832f-1c2718243091, event type delete
I0823 18:03:20.025976       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-694" (1.6µs)
I0823 18:03:20.058342       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-694, name kube-root-ca.crt, uid 4952dca5-a1b9-4c63-b210-d00d723459cd, event type delete
I0823 18:03:20.060945       1 publisher.go:186] Finished syncing namespace "azuredisk-694" (2.835521ms)
I0823 18:03:20.070938       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-694, estimate: 0, errors: <nil>
... skipping 360 lines ...
I0823 18:05:23.092691       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e] started
I0823 18:05:23.093118       1 pv_controller.go:1108] reclaimVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: policy is Delete
I0823 18:05:23.093141       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]]
I0823 18:05:23.093148       1 pv_controller.go:1763] operation "delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]" is already running, skipping
I0823 18:05:23.095302       1 pv_controller.go:1340] isVolumeReleased[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is released
I0823 18:05:23.095319       1 pv_controller.go:1404] doDeleteVolume [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]
I0823 18:05:23.153100       1 pv_controller.go:1259] deletion of volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:05:23.153136       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: set phase Failed
I0823 18:05:23.153147       1 pv_controller.go:858] updating PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: set phase Failed
I0823 18:05:23.160713       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" with version 1589
I0823 18:05:23.160757       1 pv_controller.go:879] volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" entered phase "Failed"
I0823 18:05:23.160770       1 pv_controller.go:901] volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
E0823 18:05:23.160817       1 goroutinemap.go:150] Operation for "delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]" failed. No retries permitted until 2021-08-23 18:05:23.660794525 +0000 UTC m=+526.069827382 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:05:23.161275       1 pv_protection_controller.go:205] Got event on PV pvc-6cf967e6-5b3c-4059-be5c-36066e10047e
I0823 18:05:23.161318       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" with version 1589
I0823 18:05:23.161404       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase: Failed, bound to: "azuredisk-3274/pvc-dxmxr (uid: 6cf967e6-5b3c-4059-be5c-36066e10047e)", boundByController: true
I0823 18:05:23.161611       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is bound to claim azuredisk-3274/pvc-dxmxr
I0823 18:05:23.161663       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: claim azuredisk-3274/pvc-dxmxr not found
I0823 18:05:23.161677       1 pv_controller.go:1108] reclaimVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: policy is Delete
I0823 18:05:23.161697       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]]
I0823 18:05:23.161709       1 pv_controller.go:1765] operation "delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]" postponed due to exponential backoff
I0823 18:05:23.162040       1 event.go:291] "Event occurred" object="pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted"
... skipping 12 lines ...
I0823 18:05:31.823705       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e from node "capz-tj2yec-md-0-792q5"
I0823 18:05:31.886133       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e"
I0823 18:05:31.886170       1 azure_controller_standard.go:166] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e)
I0823 18:05:34.871579       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:05:34.882716       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:05:34.882785       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" with version 1589
I0823 18:05:34.882829       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase: Failed, bound to: "azuredisk-3274/pvc-dxmxr (uid: 6cf967e6-5b3c-4059-be5c-36066e10047e)", boundByController: true
I0823 18:05:34.882876       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is bound to claim azuredisk-3274/pvc-dxmxr
I0823 18:05:34.882895       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: claim azuredisk-3274/pvc-dxmxr not found
I0823 18:05:34.882904       1 pv_controller.go:1108] reclaimVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: policy is Delete
I0823 18:05:34.882924       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]]
I0823 18:05:34.882953       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e] started
I0823 18:05:34.893094       1 pv_controller.go:1340] isVolumeReleased[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is released
I0823 18:05:34.893119       1 pv_controller.go:1404] doDeleteVolume [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]
I0823 18:05:34.893446       1 pv_controller.go:1259] deletion of volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) since it's in attaching or detaching state
I0823 18:05:34.893553       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: set phase Failed
I0823 18:05:34.893664       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase Failed already set
E0823 18:05:34.893778       1 goroutinemap.go:150] Operation for "delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]" failed. No retries permitted until 2021-08-23 18:05:35.89375096 +0000 UTC m=+538.302783717 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) since it's in attaching or detaching state
I0823 18:05:34.987849       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-792q5 ReadyCondition updated. Updating timestamp.
I0823 18:05:39.418962       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="85.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57692" resp=200
I0823 18:05:41.002359       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 0 items received
I0823 18:05:42.847493       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 10 items received
I0823 18:05:46.824557       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0823 18:05:47.391931       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e) returned with <nil>
... skipping 2 lines ...
I0823 18:05:47.392013       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e") on node "capz-tj2yec-md-0-792q5" 
I0823 18:05:49.418473       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="67.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57792" resp=200
I0823 18:05:49.867210       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:05:49.872344       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:05:49.883518       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:05:49.883648       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" with version 1589
I0823 18:05:49.883782       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase: Failed, bound to: "azuredisk-3274/pvc-dxmxr (uid: 6cf967e6-5b3c-4059-be5c-36066e10047e)", boundByController: true
I0823 18:05:49.883869       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is bound to claim azuredisk-3274/pvc-dxmxr
I0823 18:05:49.883949       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: claim azuredisk-3274/pvc-dxmxr not found
I0823 18:05:49.883964       1 pv_controller.go:1108] reclaimVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: policy is Delete
I0823 18:05:49.883984       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]]
I0823 18:05:49.884065       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e] started
I0823 18:05:49.894225       1 pv_controller.go:1340] isVolumeReleased[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is released
... skipping 5 lines ...
I0823 18:05:55.223157       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e
I0823 18:05:55.223328       1 pv_controller.go:1435] volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" deleted
I0823 18:05:55.223409       1 pv_controller.go:1283] deleteVolumeOperation [pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: success
I0823 18:05:55.229660       1 pv_protection_controller.go:205] Got event on PV pvc-6cf967e6-5b3c-4059-be5c-36066e10047e
I0823 18:05:55.229699       1 pv_protection_controller.go:125] Processing PV pvc-6cf967e6-5b3c-4059-be5c-36066e10047e
I0823 18:05:55.229806       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6cf967e6-5b3c-4059-be5c-36066e10047e" with version 1638
I0823 18:05:55.229853       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: phase: Failed, bound to: "azuredisk-3274/pvc-dxmxr (uid: 6cf967e6-5b3c-4059-be5c-36066e10047e)", boundByController: true
I0823 18:05:55.229895       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: volume is bound to claim azuredisk-3274/pvc-dxmxr
I0823 18:05:55.229918       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: claim azuredisk-3274/pvc-dxmxr not found
I0823 18:05:55.229929       1 pv_controller.go:1108] reclaimVolume[pvc-6cf967e6-5b3c-4059-be5c-36066e10047e]: policy is Delete
I0823 18:05:55.229946       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]]
I0823 18:05:55.229954       1 pv_controller.go:1763] operation "delete-pvc-6cf967e6-5b3c-4059-be5c-36066e10047e[4565480a-e8bd-4f0f-9e14-01b740b98ff8]" is already running, skipping
I0823 18:05:55.235077       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-6cf967e6-5b3c-4059-be5c-36066e10047e
... skipping 248 lines ...
I0823 18:06:11.247681       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: claim azuredisk-495/pvc-69hjl not found
I0823 18:06:11.247686       1 pv_controller.go:1108] reclaimVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: policy is Delete
I0823 18:06:11.247697       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]]
I0823 18:06:11.247702       1 pv_controller.go:1763] operation "delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]" is already running, skipping
I0823 18:06:11.251516       1 pv_controller.go:1340] isVolumeReleased[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is released
I0823 18:06:11.251534       1 pv_controller.go:1404] doDeleteVolume [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]
I0823 18:06:11.274640       1 pv_controller.go:1259] deletion of volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:06:11.274665       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: set phase Failed
I0823 18:06:11.274676       1 pv_controller.go:858] updating PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: set phase Failed
I0823 18:06:11.279140       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" with version 1724
I0823 18:06:11.279213       1 pv_controller.go:879] volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" entered phase "Failed"
I0823 18:06:11.279226       1 pv_controller.go:901] volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
E0823 18:06:11.279289       1 goroutinemap.go:150] Operation for "delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]" failed. No retries permitted until 2021-08-23 18:06:11.779267623 +0000 UTC m=+574.188300480 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:06:11.279474       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" with version 1724
I0823 18:06:11.279610       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: phase: Failed, bound to: "azuredisk-495/pvc-69hjl (uid: 2fcc8d57-3335-44bc-8366-2b93f2d3a64c)", boundByController: true
I0823 18:06:11.279145       1 pv_protection_controller.go:205] Got event on PV pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c
I0823 18:06:11.279819       1 event.go:291] "Event occurred" object="pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted"
I0823 18:06:11.279956       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is bound to claim azuredisk-495/pvc-69hjl
I0823 18:06:11.280102       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: claim azuredisk-495/pvc-69hjl not found
I0823 18:06:11.280280       1 pv_controller.go:1108] reclaimVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: policy is Delete
I0823 18:06:11.280435       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]]
... skipping 13 lines ...
I0823 18:06:17.484887       1 azure_controller_standard.go:166] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c)
I0823 18:06:19.417742       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="85.401µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58084" resp=200
I0823 18:06:19.867384       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:06:19.873514       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:06:19.884679       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:06:19.884782       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" with version 1724
I0823 18:06:19.884997       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: phase: Failed, bound to: "azuredisk-495/pvc-69hjl (uid: 2fcc8d57-3335-44bc-8366-2b93f2d3a64c)", boundByController: true
I0823 18:06:19.885124       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is bound to claim azuredisk-495/pvc-69hjl
I0823 18:06:19.885153       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: claim azuredisk-495/pvc-69hjl not found
I0823 18:06:19.885165       1 pv_controller.go:1108] reclaimVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: policy is Delete
I0823 18:06:19.885243       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]]
I0823 18:06:19.885355       1 pv_controller.go:1231] deleteVolumeOperation [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c] started
I0823 18:06:19.894481       1 pv_controller.go:1340] isVolumeReleased[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is released
I0823 18:06:19.894500       1 pv_controller.go:1404] doDeleteVolume [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]
I0823 18:06:19.894596       1 pv_controller.go:1259] deletion of volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) since it's in attaching or detaching state
I0823 18:06:19.894662       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: set phase Failed
I0823 18:06:19.894767       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: phase Failed already set
E0823 18:06:19.894810       1 goroutinemap.go:150] Operation for "delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]" failed. No retries permitted until 2021-08-23 18:06:20.894784807 +0000 UTC m=+583.303817664 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c) since it's in attaching or detaching state
I0823 18:06:19.995126       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:06:20.906353       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:06:21.850051       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0823 18:06:24.746277       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 22 items received
I0823 18:06:25.861092       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 0 items received
I0823 18:06:26.824149       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
... skipping 6 lines ...
I0823 18:06:32.898117       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c was detached from node:capz-tj2yec-md-0-hbpcn
I0823 18:06:32.898148       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c") on node "capz-tj2yec-md-0-hbpcn" 
I0823 18:06:33.863064       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 0 items received
I0823 18:06:34.874503       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:06:34.885665       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:06:34.885796       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" with version 1724
I0823 18:06:34.885955       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: phase: Failed, bound to: "azuredisk-495/pvc-69hjl (uid: 2fcc8d57-3335-44bc-8366-2b93f2d3a64c)", boundByController: true
I0823 18:06:34.886049       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is bound to claim azuredisk-495/pvc-69hjl
I0823 18:06:34.886181       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: claim azuredisk-495/pvc-69hjl not found
I0823 18:06:34.886196       1 pv_controller.go:1108] reclaimVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: policy is Delete
I0823 18:06:34.886216       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]]
I0823 18:06:34.886281       1 pv_controller.go:1231] deleteVolumeOperation [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c] started
I0823 18:06:34.892603       1 pv_controller.go:1340] isVolumeReleased[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is released
... skipping 2 lines ...
I0823 18:06:40.095811       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c
I0823 18:06:40.095852       1 pv_controller.go:1435] volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" deleted
I0823 18:06:40.095899       1 pv_controller.go:1283] deleteVolumeOperation [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: success
I0823 18:06:40.110561       1 pv_protection_controller.go:205] Got event on PV pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c
I0823 18:06:40.110700       1 pv_protection_controller.go:125] Processing PV pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c
I0823 18:06:40.110581       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" with version 1769
I0823 18:06:40.110829       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: phase: Failed, bound to: "azuredisk-495/pvc-69hjl (uid: 2fcc8d57-3335-44bc-8366-2b93f2d3a64c)", boundByController: true
I0823 18:06:40.110861       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: volume is bound to claim azuredisk-495/pvc-69hjl
I0823 18:06:40.110881       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: claim azuredisk-495/pvc-69hjl not found
I0823 18:06:40.110890       1 pv_controller.go:1108] reclaimVolume[pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c]: policy is Delete
I0823 18:06:40.110904       1 pv_controller.go:1752] scheduleOperation[delete-pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c[34dac6f7-6d70-49ae-91a0-d6a92f16ed7f]]
I0823 18:06:40.110928       1 pv_controller.go:1231] deleteVolumeOperation [pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c] started
I0823 18:06:40.116335       1 pv_controller.go:1243] Volume "pvc-2fcc8d57-3335-44bc-8366-2b93f2d3a64c" is already being deleted
... skipping 251 lines ...
I0823 18:06:58.894729       1 pv_controller.go:1108] reclaimVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: policy is Delete
I0823 18:06:58.894838       1 pv_controller.go:1752] scheduleOperation[delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]]
I0823 18:06:58.894855       1 pv_controller.go:1763] operation "delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]" is already running, skipping
I0823 18:06:58.894971       1 pv_controller.go:1231] deleteVolumeOperation [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a] started
I0823 18:06:58.896855       1 pv_controller.go:1340] isVolumeReleased[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is released
I0823 18:06:58.896871       1 pv_controller.go:1404] doDeleteVolume [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]
I0823 18:06:58.920013       1 pv_controller.go:1259] deletion of volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:06:58.920039       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: set phase Failed
I0823 18:06:58.920049       1 pv_controller.go:858] updating PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: set phase Failed
I0823 18:06:58.924373       1 pv_protection_controller.go:205] Got event on PV pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a
I0823 18:06:58.924679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" with version 1853
I0823 18:06:58.925085       1 pv_controller.go:879] volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" entered phase "Failed"
I0823 18:06:58.925209       1 pv_controller.go:901] volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:06:58.924719       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" with version 1853
E0823 18:06:58.925380       1 goroutinemap.go:150] Operation for "delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]" failed. No retries permitted until 2021-08-23 18:06:59.425244254 +0000 UTC m=+621.834277111 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:06:58.925388       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: phase: Failed, bound to: "azuredisk-9947/pvc-6wszl (uid: 079f7425-1ffb-40e8-a65e-3d14ba90204a)", boundByController: true
I0823 18:06:58.925576       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is bound to claim azuredisk-9947/pvc-6wszl
I0823 18:06:58.925692       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: claim azuredisk-9947/pvc-6wszl not found
I0823 18:06:58.925440       1 event.go:291] "Event occurred" object="pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted"
I0823 18:06:58.925795       1 pv_controller.go:1108] reclaimVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: policy is Delete
I0823 18:06:58.926013       1 pv_controller.go:1752] scheduleOperation[delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]]
I0823 18:06:58.926056       1 pv_controller.go:1765] operation "delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]" postponed due to exponential backoff
... skipping 8 lines ...
I0823 18:07:02.507281       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a from node "capz-tj2yec-md-0-792q5"
I0823 18:07:02.507332       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a"
I0823 18:07:02.507539       1 azure_controller_standard.go:166] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a)
I0823 18:07:04.875162       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:07:04.887325       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:07:04.887407       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" with version 1853
I0823 18:07:04.887472       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: phase: Failed, bound to: "azuredisk-9947/pvc-6wszl (uid: 079f7425-1ffb-40e8-a65e-3d14ba90204a)", boundByController: true
I0823 18:07:04.887508       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is bound to claim azuredisk-9947/pvc-6wszl
I0823 18:07:04.887526       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: claim azuredisk-9947/pvc-6wszl not found
I0823 18:07:04.887536       1 pv_controller.go:1108] reclaimVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: policy is Delete
I0823 18:07:04.887553       1 pv_controller.go:1752] scheduleOperation[delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]]
I0823 18:07:04.887584       1 pv_controller.go:1231] deleteVolumeOperation [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a] started
I0823 18:07:04.895732       1 pv_controller.go:1340] isVolumeReleased[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is released
I0823 18:07:04.895754       1 pv_controller.go:1404] doDeleteVolume [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]
I0823 18:07:04.895793       1 pv_controller.go:1259] deletion of volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) since it's in attaching or detaching state
I0823 18:07:04.895809       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: set phase Failed
I0823 18:07:04.895820       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: phase Failed already set
E0823 18:07:04.895850       1 goroutinemap.go:150] Operation for "delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]" failed. No retries permitted until 2021-08-23 18:07:05.895830943 +0000 UTC m=+628.304863800 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) since it's in attaching or detaching state
I0823 18:07:05.001651       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-792q5 ReadyCondition updated. Updating timestamp.
I0823 18:07:08.671872       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0823 18:07:09.418568       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="69.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58566" resp=200
I0823 18:07:09.912701       1 gc_controller.go:161] GC'ing orphaned
I0823 18:07:09.912853       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:07:17.873348       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a) returned with <nil>
... skipping 2 lines ...
I0823 18:07:17.873633       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a") on node "capz-tj2yec-md-0-792q5" 
I0823 18:07:19.428314       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="179.702µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58660" resp=200
I0823 18:07:19.868746       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:07:19.875978       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:07:19.887883       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:07:19.887950       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" with version 1853
I0823 18:07:19.887996       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: phase: Failed, bound to: "azuredisk-9947/pvc-6wszl (uid: 079f7425-1ffb-40e8-a65e-3d14ba90204a)", boundByController: true
I0823 18:07:19.888042       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is bound to claim azuredisk-9947/pvc-6wszl
I0823 18:07:19.888065       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: claim azuredisk-9947/pvc-6wszl not found
I0823 18:07:19.888071       1 pv_controller.go:1108] reclaimVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: policy is Delete
I0823 18:07:19.888084       1 pv_controller.go:1752] scheduleOperation[delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]]
I0823 18:07:19.888109       1 pv_controller.go:1231] deleteVolumeOperation [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a] started
I0823 18:07:19.898585       1 pv_controller.go:1340] isVolumeReleased[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is released
... skipping 6 lines ...
I0823 18:07:25.104517       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a
I0823 18:07:25.104557       1 pv_controller.go:1435] volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" deleted
I0823 18:07:25.104597       1 pv_controller.go:1283] deleteVolumeOperation [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: success
I0823 18:07:25.111750       1 pv_protection_controller.go:205] Got event on PV pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a
I0823 18:07:25.111782       1 pv_protection_controller.go:125] Processing PV pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a
I0823 18:07:25.111840       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" with version 1896
I0823 18:07:25.111873       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: phase: Failed, bound to: "azuredisk-9947/pvc-6wszl (uid: 079f7425-1ffb-40e8-a65e-3d14ba90204a)", boundByController: true
I0823 18:07:25.112107       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: volume is bound to claim azuredisk-9947/pvc-6wszl
I0823 18:07:25.112130       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: claim azuredisk-9947/pvc-6wszl not found
I0823 18:07:25.112138       1 pv_controller.go:1108] reclaimVolume[pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a]: policy is Delete
I0823 18:07:25.112414       1 pv_controller.go:1752] scheduleOperation[delete-pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a[3609b773-abbe-4531-978e-f7c1a0ded30d]]
I0823 18:07:25.112592       1 pv_controller.go:1231] deleteVolumeOperation [pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a] started
I0823 18:07:25.116461       1 pv_controller.go:1243] Volume "pvc-079f7425-1ffb-40e8-a65e-3d14ba90204a" is already being deleted
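Not part of the captured log: the cycle above only succeeds once the detach issued by the attach/detach controller completes, since the managed disk cannot be deleted while it is attached to a VM or while an attach/detach operation is still in flight. A small sketch of that gate, assuming the k8s.io/apimachinery wait helpers are available; isDiskDeletable and the simulated timings are hypothetical, not the azure cloud-provider code.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// isDiskDeletable is a hypothetical check mirroring the two refusals in the
// cycle above: delete fails while the disk is attached to a VM, and again
// while the detach is still in flight ("attaching or detaching state").
func isDiskDeletable(attached, transitioning bool) error {
	if attached {
		return errors.New("disk already attached to node, could not be deleted")
	}
	if transitioning {
		return errors.New("disk is in attaching or detaching state")
	}
	return nil
}

func main() {
	start := time.Now()
	// Poll until the detach has completed, then perform the delete,
	// as the PV controller eventually does in the log above.
	err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
		attached := time.Since(start) < 1*time.Second      // simulated: detach not yet started
		transitioning := time.Since(start) < 3*time.Second // simulated: detach still in flight
		if e := isDiskDeletable(attached, transitioning); e != nil {
			fmt.Println("not deletable yet:", e)
			return false, nil // keep polling
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up waiting for the disk to become deletable:", err)
		return
	}
	fmt.Println("azureDisk - deleting managed disk")
}
```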
... skipping 1107 lines ...
I0823 18:14:19.679714       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:19.679723       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:19.679729       1 pv_controller.go:1763] operation "delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]" is already running, skipping
I0823 18:14:19.681564       1 actual_state_of_world.go:427] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9 on node "capz-tj2yec-md-0-hbpcn"
I0823 18:14:19.683041       1 pv_controller.go:1340] isVolumeReleased[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is released
I0823 18:14:19.683062       1 pv_controller.go:1404] doDeleteVolume [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]
I0823 18:14:19.741409       1 pv_controller.go:1259] deletion of volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:14:19.741441       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: set phase Failed
I0823 18:14:19.741451       1 pv_controller.go:858] updating PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: set phase Failed
I0823 18:14:19.746014       1 pv_protection_controller.go:205] Got event on PV pvc-794b46bd-8097-4936-b3a4-677616ffe8b9
I0823 18:14:19.746041       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2537
I0823 18:14:19.746054       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2537
I0823 18:14:19.746064       1 pv_controller.go:879] volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" entered phase "Failed"
I0823 18:14:19.746080       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase: Failed, bound to: "azuredisk-5541/pvc-5ddt6 (uid: 794b46bd-8097-4936-b3a4-677616ffe8b9)", boundByController: true
I0823 18:14:19.746074       1 pv_controller.go:901] volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:14:19.746107       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is bound to claim azuredisk-5541/pvc-5ddt6
I0823 18:14:19.746127       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: claim azuredisk-5541/pvc-5ddt6 not found
E0823 18:14:19.746133       1 goroutinemap.go:150] Operation for "delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]" failed. No retries permitted until 2021-08-23 18:14:20.246112499 +0000 UTC m=+1062.655145256 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:14:19.746134       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:19.746150       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:19.746158       1 pv_controller.go:1765] operation "delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]" postponed due to exponential backoff
I0823 18:14:19.746206       1 event.go:291] "Event occurred" object="pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted"
I0823 18:14:19.880202       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:14:19.897130       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:14:19.906272       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:14:19.906390       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2537
I0823 18:14:19.906530       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase: Failed, bound to: "azuredisk-5541/pvc-5ddt6 (uid: 794b46bd-8097-4936-b3a4-677616ffe8b9)", boundByController: true
I0823 18:14:19.906663       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is bound to claim azuredisk-5541/pvc-5ddt6
I0823 18:14:19.906690       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: claim azuredisk-5541/pvc-5ddt6 not found
I0823 18:14:19.906704       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:19.906724       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:19.906751       1 pv_controller.go:1765] operation "delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]" postponed due to exponential backoff
I0823 18:14:20.081262       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-control-plane-r8mns ReadyCondition updated. Updating timestamp.
... skipping 13 lines ...
I0823 18:14:29.932542       1 gc_controller.go:161] GC'ing orphaned
I0823 18:14:29.932583       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:14:30.082802       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:14:34.897253       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:14:34.907394       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:14:34.907550       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2537
I0823 18:14:34.907654       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase: Failed, bound to: "azuredisk-5541/pvc-5ddt6 (uid: 794b46bd-8097-4936-b3a4-677616ffe8b9)", boundByController: true
I0823 18:14:34.907755       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is bound to claim azuredisk-5541/pvc-5ddt6
I0823 18:14:34.907782       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: claim azuredisk-5541/pvc-5ddt6 not found
I0823 18:14:34.907844       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:34.907870       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:34.907911       1 pv_controller.go:1231] deleteVolumeOperation [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9] started
I0823 18:14:34.913931       1 pv_controller.go:1340] isVolumeReleased[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is released
I0823 18:14:34.913949       1 pv_controller.go:1404] doDeleteVolume [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]
I0823 18:14:34.913990       1 pv_controller.go:1259] deletion of volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) since it's in attaching or detaching state
I0823 18:14:34.914006       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: set phase Failed
I0823 18:14:34.914017       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase Failed already set
E0823 18:14:34.914045       1 goroutinemap.go:150] Operation for "delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]" failed. No retries permitted until 2021-08-23 18:14:35.914026361 +0000 UTC m=+1078.323059218 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) since it's in attaching or detaching state
I0823 18:14:39.420079       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="86.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34678" resp=200
I0823 18:14:39.873705       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 10 items received
I0823 18:14:40.305071       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0823 18:14:42.067815       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 7 items received
I0823 18:14:43.524138       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) returned with <nil>
I0823 18:14:43.524189       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9) succeeded
... skipping 2 lines ...
I0823 18:14:49.419072       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="116.401µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34774" resp=200
I0823 18:14:49.850520       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 0 items received
I0823 18:14:49.881270       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:14:49.897975       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:14:49.908221       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:14:49.908290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2537
I0823 18:14:49.908338       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase: Failed, bound to: "azuredisk-5541/pvc-5ddt6 (uid: 794b46bd-8097-4936-b3a4-677616ffe8b9)", boundByController: true
I0823 18:14:49.908383       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is bound to claim azuredisk-5541/pvc-5ddt6
I0823 18:14:49.908409       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: claim azuredisk-5541/pvc-5ddt6 not found
I0823 18:14:49.908423       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:49.908442       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:49.908481       1 pv_controller.go:1231] deleteVolumeOperation [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9] started
I0823 18:14:49.911693       1 pv_controller.go:1340] isVolumeReleased[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is released
... skipping 4 lines ...
I0823 18:14:55.148875       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9
I0823 18:14:55.148915       1 pv_controller.go:1435] volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" deleted
I0823 18:14:55.148928       1 pv_controller.go:1283] deleteVolumeOperation [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: success
I0823 18:14:55.168544       1 pv_protection_controller.go:205] Got event on PV pvc-794b46bd-8097-4936-b3a4-677616ffe8b9
I0823 18:14:55.168576       1 pv_protection_controller.go:125] Processing PV pvc-794b46bd-8097-4936-b3a4-677616ffe8b9
I0823 18:14:55.168578       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" with version 2589
I0823 18:14:55.168612       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: phase: Failed, bound to: "azuredisk-5541/pvc-5ddt6 (uid: 794b46bd-8097-4936-b3a4-677616ffe8b9)", boundByController: true
I0823 18:14:55.168653       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: volume is bound to claim azuredisk-5541/pvc-5ddt6
I0823 18:14:55.168671       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: claim azuredisk-5541/pvc-5ddt6 not found
I0823 18:14:55.168678       1 pv_controller.go:1108] reclaimVolume[pvc-794b46bd-8097-4936-b3a4-677616ffe8b9]: policy is Delete
I0823 18:14:55.168695       1 pv_controller.go:1752] scheduleOperation[delete-pvc-794b46bd-8097-4936-b3a4-677616ffe8b9[9f4351c6-07b1-4b39-ab05-a696c9644fc4]]
I0823 18:14:55.168717       1 pv_controller.go:1231] deleteVolumeOperation [pvc-794b46bd-8097-4936-b3a4-677616ffe8b9] started
I0823 18:14:55.173767       1 pv_controller.go:1243] Volume "pvc-794b46bd-8097-4936-b3a4-677616ffe8b9" is already being deleted
... skipping 33 lines ...
I0823 18:14:58.209209       1 replica_set.go:380] Pod azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t", GenerateName:"azuredisk-volume-tester-hljpq-6bfdb9657f-", Namespace:"azuredisk-5356", SelfLink:"", UID:"29e9a226-700b-4988-a061-65a25b3424f9", ResourceVersion:"2609", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63765339298, loc:(*time.Location)(0x7505dc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azuredisk-volume-tester-1598098976185383115", "pod-template-hash":"6bfdb9657f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azuredisk-volume-tester-hljpq-6bfdb9657f", UID:"ff9a0f78-9f67-45bf-bed9-4f3f8f5b4d2e", Controller:(*bool)(0xc0020fb527), BlockOwnerDeletion:(*bool)(0xc0020fb528)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001f12180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001f121c8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc001f121e0), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-xqgll", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002c620e0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 3600; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-xqgll", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020fb5f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003bc310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020fb650)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020fb670)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020fb678), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020fb67c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0021720a0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0823 18:14:58.209555       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f", timestamp:time.Time{wall:0xc04117488afb2f18, ext:1100593266509, loc:(*time.Location)(0x7505dc0)}}
I0823 18:14:58.209716       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t"
I0823 18:14:58.210075       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t"
I0823 18:14:58.210234       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 8 items received
I0823 18:14:58.216060       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-hljpq" duration="47.725221ms"
I0823 18:14:58.216097       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-hljpq" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-hljpq\": the object has been modified; please apply your changes to the latest version and try again"
I0823 18:14:58.216137       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-hljpq" startTime="2021-08-23 18:14:58.216115333 +0000 UTC m=+1100.625148190"
I0823 18:14:58.216561       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-hljpq" timed out (false) [last progress check: 2021-08-23 18:14:58 +0000 UTC - now: 2021-08-23 18:14:58.216555237 +0000 UTC m=+1100.625587994]
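Not part of the captured log: the "Error syncing deployment ... the object has been modified" line above is the usual optimistic-concurrency conflict on resourceVersion, and the controller simply resyncs with the latest object. A client hitting the same 409 can wrap its update in client-go's retry.RetryOnConflict; the sketch below simulates one conflict with a hypothetical closure rather than a real GET/Update against a cluster.

```go
package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempt := 0
	// RetryOnConflict re-runs the closure whenever it returns a 409 Conflict,
	// i.e. "the object has been modified; please apply your changes to the
	// latest version and try again".
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempt++
		// A real caller would re-GET the object here, re-apply its change,
		// and call Update; we simulate one conflict on the first attempt.
		if attempt == 1 {
			return apierrors.NewConflict(
				schema.GroupResource{Group: "apps", Resource: "deployments"},
				"azuredisk-volume-tester-hljpq",
				errors.New("the object has been modified; please apply your changes to the latest version and try again"))
		}
		return nil
	})
	fmt.Printf("update succeeded after %d attempt(s), err=%v\n", attempt, err)
}
```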
I0823 18:14:58.216868       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f" (32.82779ms)
I0823 18:14:58.216897       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f", timestamp:time.Time{wall:0xc04117488afb2f18, ext:1100593266509, loc:(*time.Location)(0x7505dc0)}}
I0823 18:14:58.216958       1 replica_set_utils.go:59] Updating status for : azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0823 18:14:58.217380       1 pvc_protection_controller.go:353] "Got event on PVC" azuredisk-5356/pvc-htbwj="(MISSING)"
... skipping 256 lines ...
I0823 18:15:17.949096       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-hljpq" duration="442.203µs"
I0823 18:15:17.952696       1 replica_set_utils.go:59] Updating status for : azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0823 18:15:17.955580       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f" (14.409207ms)
I0823 18:15:17.955730       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f", timestamp:time.Time{wall:0xc041174d75270a4a, ext:1120300783743, loc:(*time.Location)(0x7505dc0)}}
I0823 18:15:17.955856       1 controller_utils.go:948] Ignoring inactive pod azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t in state Running, deletion time 2021-08-23 18:15:47 +0000 UTC
I0823 18:15:17.955956       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f" (176.801µs)
W0823 18:15:17.981471       1 reconciler.go:376] Multi-Attach error for volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439") from node "capz-tj2yec-md-0-hbpcn" Volume is already used by pods azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t on node capz-tj2yec-md-0-792q5
I0823 18:15:17.981581       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-hljpq-6bfdb9657f-5dmb7" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-ac703f54-7351-4dbe-93d4-c8293e512439\" Volume is already used by pod(s) azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t"
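Not part of the captured log: the Multi-Attach warning above is the attach/detach controller refusing to attach a single-writer Azure disk to capz-tj2yec-md-0-hbpcn while a pod on capz-tj2yec-md-0-792q5 still uses it; the replacement pod can only attach once the old pod is gone and the disk detaches. A minimal sketch of that guard, using a hypothetical in-memory view of which pods use the volume on which node.

```go
package main

import "fmt"

// podsUsingVolume is a hypothetical stand-in for the actual-state-of-world
// cache: volume -> node -> pods still using the volume on that node.
type podsUsingVolume map[string]map[string][]string

// checkMultiAttach refuses to attach a non-shareable (RWO) volume to a node
// while any pod still uses it on a different node, as in the warning above.
func checkMultiAttach(state podsUsingVolume, volume, node string) error {
	for otherNode, pods := range state[volume] {
		if otherNode != node && len(pods) > 0 {
			return fmt.Errorf("Multi-Attach error for volume %q: already used by pods %v on node %q",
				volume, pods, otherNode)
		}
	}
	return nil
}

func main() {
	state := podsUsingVolume{
		"pvc-ac703f54": {"capz-tj2yec-md-0-792q5": {"azuredisk-volume-tester-hljpq-6bfdb9657f-6r99t"}},
	}
	fmt.Println(checkMultiAttach(state, "pvc-ac703f54", "capz-tj2yec-md-0-hbpcn")) // blocked: in use elsewhere
	fmt.Println(checkMultiAttach(state, "pvc-ac703f54", "capz-tj2yec-md-0-792q5")) // allowed: same node
}
```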
I0823 18:15:19.418988       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="78.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35072" resp=200
I0823 18:15:19.749249       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 14 items received
I0823 18:15:19.882233       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:15:19.899854       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:15:19.910028       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:15:19.910111       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 2623
... skipping 556 lines ...
I0823 18:18:09.125768       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]]
I0823 18:18:09.125775       1 pv_controller.go:1763] operation "delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]" is already running, skipping
I0823 18:18:09.125799       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ac703f54-7351-4dbe-93d4-c8293e512439] started
I0823 18:18:09.132435       1 pv_controller.go:1340] isVolumeReleased[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is released
I0823 18:18:09.132453       1 pv_controller.go:1404] doDeleteVolume [pvc-ac703f54-7351-4dbe-93d4-c8293e512439]
I0823 18:18:09.135806       1 actual_state_of_world.go:427] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439 on node "capz-tj2yec-md-0-hbpcn"
I0823 18:18:09.166029       1 pv_controller.go:1259] deletion of volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:18:09.166056       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: set phase Failed
I0823 18:18:09.166069       1 pv_controller.go:858] updating PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: set phase Failed
I0823 18:18:09.171093       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 2967
I0823 18:18:09.171133       1 pv_controller.go:879] volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" entered phase "Failed"
I0823 18:18:09.171146       1 pv_controller.go:901] volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
E0823 18:18:09.171448       1 goroutinemap.go:150] Operation for "delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]" failed. No retries permitted until 2021-08-23 18:18:09.671177756 +0000 UTC m=+1292.080210613 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:18:09.171843       1 event.go:291] "Event occurred" object="pvc-ac703f54-7351-4dbe-93d4-c8293e512439" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted"
I0823 18:18:09.172332       1 pv_protection_controller.go:205] Got event on PV pvc-ac703f54-7351-4dbe-93d4-c8293e512439
I0823 18:18:09.172482       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 2967
I0823 18:18:09.172643       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: phase: Failed, bound to: "azuredisk-5356/pvc-htbwj (uid: ac703f54-7351-4dbe-93d4-c8293e512439)", boundByController: true
I0823 18:18:09.172764       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is bound to claim azuredisk-5356/pvc-htbwj
I0823 18:18:09.172864       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: claim azuredisk-5356/pvc-htbwj not found
I0823 18:18:09.172950       1 pv_controller.go:1108] reclaimVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: policy is Delete
I0823 18:18:09.173041       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]]
I0823 18:18:09.173126       1 pv_controller.go:1765] operation "delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]" postponed due to exponential backoff
I0823 18:18:09.419285       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="99µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36714" resp=200
... skipping 12 lines ...
I0823 18:18:18.324298       1 azure_controller_standard.go:166] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439)
I0823 18:18:19.419761       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="71.1µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36806" resp=200
I0823 18:18:19.886841       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:18:19.909519       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:18:19.920677       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:18:19.920804       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 2967
I0823 18:18:19.920911       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: phase: Failed, bound to: "azuredisk-5356/pvc-htbwj (uid: ac703f54-7351-4dbe-93d4-c8293e512439)", boundByController: true
I0823 18:18:19.920965       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is bound to claim azuredisk-5356/pvc-htbwj
I0823 18:18:19.921038       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: claim azuredisk-5356/pvc-htbwj not found
I0823 18:18:19.921049       1 pv_controller.go:1108] reclaimVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: policy is Delete
I0823 18:18:19.921106       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]]
I0823 18:18:19.921188       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ac703f54-7351-4dbe-93d4-c8293e512439] started
I0823 18:18:19.928072       1 pv_controller.go:1340] isVolumeReleased[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is released
I0823 18:18:19.928096       1 pv_controller.go:1404] doDeleteVolume [pvc-ac703f54-7351-4dbe-93d4-c8293e512439]
I0823 18:18:19.928393       1 pv_controller.go:1259] deletion of volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) since it's in attaching or detaching state
I0823 18:18:19.928413       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: set phase Failed
I0823 18:18:19.928534       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: phase Failed already set
E0823 18:18:19.928642       1 goroutinemap.go:150] Operation for "delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]" failed. No retries permitted until 2021-08-23 18:18:20.928552847 +0000 UTC m=+1303.337585604 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) since it's in attaching or detaching state
I0823 18:18:20.120225       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:18:21.389987       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:18:26.890678       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 7 items received
I0823 18:18:28.110213       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 9 items received
I0823 18:18:29.418567       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="87.901µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36902" resp=200
I0823 18:18:29.909409       1 controller.go:269] Triggering nodeSync
... skipping 8 lines ...
I0823 18:18:33.824416       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439) succeeded
I0823 18:18:33.824564       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439 was detached from node:capz-tj2yec-md-0-hbpcn
I0823 18:18:33.824709       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439") on node "capz-tj2yec-md-0-hbpcn" 
I0823 18:18:34.910643       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0823 18:18:34.921858       1 pv_controller_base.go:528] resyncing PV controller
I0823 18:18:34.922095       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 2967
I0823 18:18:34.922150       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: phase: Failed, bound to: "azuredisk-5356/pvc-htbwj (uid: ac703f54-7351-4dbe-93d4-c8293e512439)", boundByController: true
I0823 18:18:34.922275       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is bound to claim azuredisk-5356/pvc-htbwj
I0823 18:18:34.922348       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: claim azuredisk-5356/pvc-htbwj not found
I0823 18:18:34.922368       1 pv_controller.go:1108] reclaimVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: policy is Delete
I0823 18:18:34.922407       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]]
I0823 18:18:34.922463       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ac703f54-7351-4dbe-93d4-c8293e512439] started
I0823 18:18:34.936667       1 pv_controller.go:1340] isVolumeReleased[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is released
... skipping 2 lines ...
I0823 18:18:40.224483       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-ac703f54-7351-4dbe-93d4-c8293e512439
I0823 18:18:40.224518       1 pv_controller.go:1435] volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" deleted
I0823 18:18:40.224708       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: success
I0823 18:18:40.234173       1 pv_protection_controller.go:205] Got event on PV pvc-ac703f54-7351-4dbe-93d4-c8293e512439
I0823 18:18:40.234213       1 pv_protection_controller.go:125] Processing PV pvc-ac703f54-7351-4dbe-93d4-c8293e512439
I0823 18:18:40.234637       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" with version 3015
I0823 18:18:40.234747       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: phase: Failed, bound to: "azuredisk-5356/pvc-htbwj (uid: ac703f54-7351-4dbe-93d4-c8293e512439)", boundByController: true
I0823 18:18:40.234829       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: volume is bound to claim azuredisk-5356/pvc-htbwj
I0823 18:18:40.234883       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: claim azuredisk-5356/pvc-htbwj not found
I0823 18:18:40.234895       1 pv_controller.go:1108] reclaimVolume[pvc-ac703f54-7351-4dbe-93d4-c8293e512439]: policy is Delete
I0823 18:18:40.234930       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ac703f54-7351-4dbe-93d4-c8293e512439[30474238-6cbb-4557-9c52-358b6cf643d0]]
I0823 18:18:40.234998       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ac703f54-7351-4dbe-93d4-c8293e512439] started
I0823 18:18:40.240301       1 pv_controller.go:1243] Volume "pvc-ac703f54-7351-4dbe-93d4-c8293e512439" is already being deleted
... skipping 287 lines ...
I0823 18:19:01.777746       1 pv_controller.go:1763] operation "provision-azuredisk-8510/pvc-cfrnh[afce054e-d667-4123-99f2-c78522ce11ea]" is already running, skipping
I0823 18:19:01.777224       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8510/pvc-xbvw9" with version 3127
I0823 18:19:01.777384       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea StorageAccountType:StandardSSD_LRS Size:10
I0823 18:19:01.780311       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 StorageAccountType:StandardSSD_LRS Size:10
I0823 18:19:03.529984       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3090
I0823 18:19:03.551551       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3090, name default-token-4s9rp, uid d3ab2ace-c97a-4046-8529-2de3e994a351, event type delete
E0823 18:19:03.567432       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3090/default: secrets "default-token-46nqr" is forbidden: unable to create new content in namespace azuredisk-3090 because it is being terminated
I0823 18:19:03.580094       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3090, name pvc-67lvq.169e025fff723383, uid b917797a-d8e8-4d66-be5a-3f6533c32fe9, event type delete
I0823 18:19:03.586476       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3090, name kube-root-ca.crt, uid af67fd0e-ba84-4989-8be3-f25e2dacc177, event type delete
I0823 18:19:03.591442       1 publisher.go:186] Finished syncing namespace "azuredisk-3090" (5.28899ms)
I0823 18:19:03.695774       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3090/default), service account deleted, removing tokens
I0823 18:19:03.695834       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3090, name default, uid 016b64b8-3d24-441e-ba7d-3b6378d177f2, event type delete
I0823 18:19:03.695864       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3090" (1.9µs)
... skipping 114 lines ...
I0823 18:19:04.245961       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8510/pvc-cfrnh] status: phase Bound already set
I0823 18:19:04.246385       1 pv_controller.go:1038] volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" bound to claim "azuredisk-8510/pvc-cfrnh"
I0823 18:19:04.246522       1 pv_controller.go:1039] volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:19:04.246648       1 pv_controller.go:1040] claim "azuredisk-8510/pvc-cfrnh" status after binding: phase: Bound, bound to: "pvc-afce054e-d667-4123-99f2-c78522ce11ea", bindCompleted: true, boundByController: true
I0823 18:19:04.337721       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4078
I0823 18:19:04.377236       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4078, name default-token-656hh, uid 63d809e8-4cc2-4b00-9a33-e8a90b525df9, event type delete
E0823 18:19:04.409582       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4078/default: secrets "default-token-577rj" is forbidden: unable to create new content in namespace azuredisk-4078 because it is being terminated
I0823 18:19:04.429721       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 StorageAccountType:StandardSSD_LRS Size:10
I0823 18:19:04.435337       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4078/default), service account deleted, removing tokens
I0823 18:19:04.435535       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4078, name default, uid 964e602a-532f-46dc-ac93-51b5a656269e, event type delete
I0823 18:19:04.435696       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4078" (2.5µs)
I0823 18:19:04.452626       1 azure_managedDiskController.go:380] Azure disk "capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" is not zoned
I0823 18:19:04.452682       1 pv_controller.go:1598] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" for claim "azuredisk-8510/pvc-xbvw9" created
... skipping 455 lines ...
I0823 18:19:40.540849       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:19:40.540999       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:19:40.541096       1 pv_controller.go:1752] scheduleOperation[delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]]
I0823 18:19:40.541177       1 pv_controller.go:1763] operation "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" is already running, skipping
I0823 18:19:40.544360       1 pv_controller.go:1340] isVolumeReleased[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is released
I0823 18:19:40.544377       1 pv_controller.go:1404] doDeleteVolume [pvc-afce054e-d667-4123-99f2-c78522ce11ea]
I0823 18:19:40.572875       1 pv_controller.go:1259] deletion of volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:19:40.573103       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: set phase Failed
I0823 18:19:40.573208       1 pv_controller.go:858] updating PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: set phase Failed
I0823 18:19:40.579407       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3252
I0823 18:19:40.579438       1 pv_controller.go:879] volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" entered phase "Failed"
I0823 18:19:40.579448       1 pv_controller.go:901] volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
E0823 18:19:40.579509       1 goroutinemap.go:150] Operation for "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" failed. No retries permitted until 2021-08-23 18:19:41.079471269 +0000 UTC m=+1383.488504126 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:19:40.579840       1 event.go:291] "Event occurred" object="pvc-afce054e-d667-4123-99f2-c78522ce11ea" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted"
I0823 18:19:40.579979       1 pv_protection_controller.go:205] Got event on PV pvc-afce054e-d667-4123-99f2-c78522ce11ea
I0823 18:19:40.579999       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3252
I0823 18:19:40.580016       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase: Failed, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:19:40.580037       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is bound to claim azuredisk-8510/pvc-cfrnh
I0823 18:19:40.580055       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:19:40.580066       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:19:40.580077       1 pv_controller.go:1752] scheduleOperation[delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]]
I0823 18:19:40.580084       1 pv_controller.go:1765] operation "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" postponed due to exponential backoff
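The block above shows the PV controller refusing to delete the managed disk while it is still attached to the node, marking the volume Failed, and postponing the delete with a doubling delay (durationBeforeRetry 500ms, then 1s, 2s, ...). Below is a minimal Go sketch of that retry pattern using the k8s.io/apimachinery wait package; deleteDisk and the simulated detach time are hypothetical stand-ins for illustration, not the controller's actual code.

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// errDiskAttached stands in for the "already attached to node(...), could not be deleted" error above.
var errDiskAttached = errors.New("disk already attached to node, could not be deleted")

// deleteDisk is a hypothetical delete call that keeps failing until the disk has been detached.
func deleteDisk(detachedAt time.Time) error {
	if time.Now().Before(detachedAt) {
		return errDiskAttached
	}
	return nil
}

func main() {
	// Doubling retry delay, comparable to durationBeforeRetry 500ms -> 1s -> 2s in the log.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 8}
	detachedAt := time.Now().Add(3 * time.Second) // pretend the detach completes after 3 seconds

	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := deleteDisk(detachedAt); err != nil {
			fmt.Println("delete postponed:", err)
			return false, nil // not done yet; retry after the next backoff interval
		}
		fmt.Println("disk deleted")
		return true, nil
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}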
I0823 18:19:47.684125       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
... skipping 38 lines ...
I0823 18:19:49.925755       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: volume is bound to claim azuredisk-8510/pvc-xjj2l
I0823 18:19:49.925956       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: claim azuredisk-8510/pvc-xjj2l found: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:19:49.925981       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: all is bound
I0823 18:19:49.925994       1 pv_controller.go:858] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: set phase Bound
I0823 18:19:49.926107       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:19:49.926201       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3252
I0823 18:19:49.926237       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase: Failed, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:19:49.926363       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is bound to claim azuredisk-8510/pvc-cfrnh
I0823 18:19:49.926483       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:19:49.926509       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:19:49.925380       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8510/pvc-xjj2l" with version 3144
I0823 18:19:49.926621       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l]: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:19:49.926717       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l]: volume "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19" found: phase: Bound, bound to: "azuredisk-8510/pvc-xjj2l (uid: 626e6ba0-503e-42ad-8e7f-f8eee4972b19)", boundByController: true
... skipping 36 lines ...
I0823 18:19:49.929519       1 pv_controller.go:1039] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-xbvw9 (uid: 1a516c1c-cad9-4955-a3ea-6b87bab4bd85)", boundByController: true
I0823 18:19:49.929548       1 pv_controller.go:1040] claim "azuredisk-8510/pvc-xbvw9" status after binding: phase: Bound, bound to: "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85", bindCompleted: true, boundByController: true
I0823 18:19:49.935129       1 pv_controller.go:1340] isVolumeReleased[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is released
I0823 18:19:49.935148       1 pv_controller.go:1404] doDeleteVolume [pvc-afce054e-d667-4123-99f2-c78522ce11ea]
I0823 18:19:49.941216       1 gc_controller.go:161] GC'ing orphaned
I0823 18:19:49.941244       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:19:49.959798       1 pv_controller.go:1259] deletion of volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:19:49.959855       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: set phase Failed
I0823 18:19:49.959868       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase Failed already set
E0823 18:19:49.960045       1 goroutinemap.go:150] Operation for "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" failed. No retries permitted until 2021-08-23 18:19:50.959879319 +0000 UTC m=+1393.368912176 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-hbpcn), could not be deleted
I0823 18:19:50.133634       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-hbpcn ReadyCondition updated. Updating timestamp.
I0823 18:19:51.446356       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:19:56.164998       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 4 items received
I0823 18:19:59.419356       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="242.902µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37778" resp=200
I0823 18:20:03.651150       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19) returned with <nil>
I0823 18:20:03.651463       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19) succeeded
... skipping 48 lines ...
I0823 18:20:04.927179       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: volume is bound to claim azuredisk-8510/pvc-xjj2l
I0823 18:20:04.927217       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: claim azuredisk-8510/pvc-xjj2l found: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:20:04.927295       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: all is bound
I0823 18:20:04.927308       1 pv_controller.go:858] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: set phase Bound
I0823 18:20:04.927317       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:20:04.927331       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3252
I0823 18:20:04.927433       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase: Failed, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:20:04.927549       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is bound to claim azuredisk-8510/pvc-cfrnh
I0823 18:20:04.927573       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:20:04.927653       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:20:04.927735       1 pv_controller.go:1752] scheduleOperation[delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]]
I0823 18:20:04.927778       1 pv_controller.go:1231] deleteVolumeOperation [pvc-afce054e-d667-4123-99f2-c78522ce11ea] started
I0823 18:20:04.930649       1 pv_controller.go:1340] isVolumeReleased[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is released
I0823 18:20:04.930666       1 pv_controller.go:1404] doDeleteVolume [pvc-afce054e-d667-4123-99f2-c78522ce11ea]
I0823 18:20:04.930703       1 pv_controller.go:1259] deletion of volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) since it's in attaching or detaching state
I0823 18:20:04.930718       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: set phase Failed
I0823 18:20:04.930728       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase Failed already set
E0823 18:20:04.930755       1 goroutinemap.go:150] Operation for "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" failed. No retries permitted until 2021-08-23 18:20:06.930736997 +0000 UTC m=+1409.339769754 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea) since it's in attaching or detaching state
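Here the delete fails with a different message: the detach has been issued but is still in flight, so the disk is reported as "in attaching or detaching state" and deletion is refused until the detach completes. A rough, hypothetical sketch of that kind of pre-delete guard follows; the field names and state value are illustrative only, not the Azure SDK's.

package guard

import "fmt"

// canDeleteDisk is an illustrative guard, not the cloud provider's real check.
// managedBy would be the VM the disk is attached to; provisioningState "Updating"
// stands in for an attach/detach operation still in progress.
func canDeleteDisk(managedBy, provisioningState string) error {
	if managedBy != "" {
		return fmt.Errorf("disk already attached to node(%s), could not be deleted", managedBy)
	}
	if provisioningState == "Updating" {
		return fmt.Errorf("failed to delete disk since it's in attaching or detaching state")
	}
	return nil
}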
I0823 18:20:06.842082       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 5 items received
I0823 18:20:09.418931       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="93.2µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37872" resp=200
I0823 18:20:09.909590       1 controller.go:269] Triggering nodeSync
I0823 18:20:09.909641       1 controller.go:288] nodeSync has been triggered
I0823 18:20:09.909653       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0823 18:20:09.909664       1 controller.go:779] Finished updateLoadBalancerHosts
... skipping 17 lines ...
I0823 18:20:19.926694       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: claim azuredisk-8510/pvc-xjj2l found: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:20:19.926786       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: all is bound
I0823 18:20:19.926882       1 pv_controller.go:858] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: set phase Bound
I0823 18:20:19.926944       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:20:19.927027       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3252
I0823 18:20:19.926757       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8510/pvc-xjj2l" with version 3144
I0823 18:20:19.927145       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase: Failed, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:20:19.927272       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is bound to claim azuredisk-8510/pvc-cfrnh
I0823 18:20:19.927349       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:20:19.927430       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:20:19.927513       1 pv_controller.go:1752] scheduleOperation[delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]]
I0823 18:20:19.927350       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l]: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:20:19.927570       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3156
... skipping 44 lines ...
I0823 18:20:25.210405       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-afce054e-d667-4123-99f2-c78522ce11ea
I0823 18:20:25.210446       1 pv_controller.go:1435] volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" deleted
I0823 18:20:25.210464       1 pv_controller.go:1283] deleteVolumeOperation [pvc-afce054e-d667-4123-99f2-c78522ce11ea]: success
I0823 18:20:25.219757       1 pv_protection_controller.go:205] Got event on PV pvc-afce054e-d667-4123-99f2-c78522ce11ea
I0823 18:20:25.219798       1 pv_protection_controller.go:125] Processing PV pvc-afce054e-d667-4123-99f2-c78522ce11ea
I0823 18:20:25.219899       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-afce054e-d667-4123-99f2-c78522ce11ea" with version 3319
I0823 18:20:25.219955       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: phase: Failed, bound to: "azuredisk-8510/pvc-cfrnh (uid: afce054e-d667-4123-99f2-c78522ce11ea)", boundByController: true
I0823 18:20:25.219985       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: volume is bound to claim azuredisk-8510/pvc-cfrnh
I0823 18:20:25.220004       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: claim azuredisk-8510/pvc-cfrnh not found
I0823 18:20:25.220013       1 pv_controller.go:1108] reclaimVolume[pvc-afce054e-d667-4123-99f2-c78522ce11ea]: policy is Delete
I0823 18:20:25.220044       1 pv_controller.go:1752] scheduleOperation[delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]]
I0823 18:20:25.220052       1 pv_controller.go:1763] operation "delete-pvc-afce054e-d667-4123-99f2-c78522ce11ea[e338a883-8c7b-4912-80d9-a7ea9e81a3be]" is already running, skipping
I0823 18:20:25.226030       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-afce054e-d667-4123-99f2-c78522ce11ea
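Once doDeleteVolume reports success, the pv-protection controller strips the kubernetes.io/pv-protection finalizer so the PV object can actually be removed. The sketch below is roughly equivalent to that finalizer removal expressed as a client-go merge patch; clientset wiring is omitted and doing this by hand is normally unnecessary, since the controller handles it.

package pvutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// removePVFinalizers clears the finalizers on a PV, mirroring in effect what
// pv_protection_controller does after the backing disk is gone.
func removePVFinalizers(ctx context.Context, cs kubernetes.Interface, pvName string) error {
	patch := []byte(`{"metadata":{"finalizers":null}}`)
	_, err := cs.CoreV1().PersistentVolumes().Patch(ctx, pvName, types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}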
... skipping 44 lines ...
I0823 18:20:26.319428       1 pv_controller.go:1108] reclaimVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: policy is Delete
I0823 18:20:26.319438       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]]
I0823 18:20:26.319442       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85] started
I0823 18:20:26.319444       1 pv_controller.go:1763] operation "delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]" is already running, skipping
I0823 18:20:26.321469       1 pv_controller.go:1340] isVolumeReleased[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is released
I0823 18:20:26.321487       1 pv_controller.go:1404] doDeleteVolume [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]
I0823 18:20:26.321523       1 pv_controller.go:1259] deletion of volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state
I0823 18:20:26.321537       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: set phase Failed
I0823 18:20:26.321546       1 pv_controller.go:858] updating PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: set phase Failed
I0823 18:20:26.324853       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3326
I0823 18:20:26.324897       1 pv_controller.go:879] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" entered phase "Failed"
I0823 18:20:26.324908       1 pv_controller.go:901] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state
E0823 18:20:26.325013       1 goroutinemap.go:150] Operation for "delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]" failed. No retries permitted until 2021-08-23 18:20:26.824930998 +0000 UTC m=+1429.233963855 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state
I0823 18:20:26.325443       1 event.go:291] "Event occurred" object="pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state"
I0823 18:20:26.325918       1 pv_protection_controller.go:205] Got event on PV pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85
I0823 18:20:26.326023       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3326
I0823 18:20:26.326106       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: phase: Failed, bound to: "azuredisk-8510/pvc-xbvw9 (uid: 1a516c1c-cad9-4955-a3ea-6b87bab4bd85)", boundByController: true
I0823 18:20:26.326275       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is bound to claim azuredisk-8510/pvc-xbvw9
I0823 18:20:26.326364       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: claim azuredisk-8510/pvc-xbvw9 not found
I0823 18:20:26.326410       1 pv_controller.go:1108] reclaimVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: policy is Delete
I0823 18:20:26.326477       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]]
I0823 18:20:26.326515       1 pv_controller.go:1765] operation "delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]" postponed due to exponential backoff
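Note how every resync goes through scheduleOperation, which either starts delete-pvc-... or reports that the operation "is already running, skipping" / "postponed due to exponential backoff". A simplified, hypothetical sketch of that kind of in-flight operation map is below; the real goroutinemap also tracks per-operation backoff state.

package opmap

import (
	"fmt"
	"sync"
)

// operationMap ensures at most one goroutine runs per operation name,
// similar in spirit to the goroutinemap used by the PV controller.
type operationMap struct {
	mu  sync.Mutex
	ops map[string]struct{}
}

func newOperationMap() *operationMap {
	return &operationMap{ops: map[string]struct{}{}}
}

// Run starts fn under the given name unless an operation with that name is already in flight.
func (m *operationMap) Run(name string, fn func()) error {
	m.mu.Lock()
	if _, running := m.ops[name]; running {
		m.mu.Unlock()
		return fmt.Errorf("operation %q is already running, skipping", name)
	}
	m.ops[name] = struct{}{}
	m.mu.Unlock()

	go func() {
		defer func() {
			m.mu.Lock()
			delete(m.ops, name)
			m.mu.Unlock()
		}()
		fn()
	}()
	return nil
}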
I0823 18:20:29.418360       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="84.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38058" resp=200
... skipping 18 lines ...
I0823 18:20:34.927383       1 pv_controller.go:858] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: set phase Bound
I0823 18:20:34.927393       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:20:34.927396       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:20:34.927403       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l]: binding to "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19"
I0823 18:20:34.927411       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3326
I0823 18:20:34.927429       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l]: already bound to "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19"
I0823 18:20:34.927433       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: phase: Failed, bound to: "azuredisk-8510/pvc-xbvw9 (uid: 1a516c1c-cad9-4955-a3ea-6b87bab4bd85)", boundByController: true
I0823 18:20:34.927442       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l] status: set phase Bound
I0823 18:20:34.927455       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is bound to claim azuredisk-8510/pvc-xbvw9
I0823 18:20:34.927468       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8510/pvc-xjj2l] status: phase Bound already set
I0823 18:20:34.927481       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: claim azuredisk-8510/pvc-xbvw9 not found
I0823 18:20:34.927483       1 pv_controller.go:1038] volume "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19" bound to claim "azuredisk-8510/pvc-xjj2l"
I0823 18:20:34.927497       1 pv_controller.go:1108] reclaimVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: policy is Delete
I0823 18:20:34.927509       1 pv_controller.go:1039] volume "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-xjj2l (uid: 626e6ba0-503e-42ad-8e7f-f8eee4972b19)", boundByController: true
I0823 18:20:34.927517       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]]
I0823 18:20:34.927527       1 pv_controller.go:1040] claim "azuredisk-8510/pvc-xjj2l" status after binding: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:20:34.927546       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85] started
I0823 18:20:34.939975       1 pv_controller.go:1340] isVolumeReleased[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is released
I0823 18:20:34.939996       1 pv_controller.go:1404] doDeleteVolume [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]
I0823 18:20:34.940043       1 pv_controller.go:1259] deletion of volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state
I0823 18:20:34.940056       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: set phase Failed
I0823 18:20:34.940066       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: phase Failed already set
E0823 18:20:34.940103       1 goroutinemap.go:150] Operation for "delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]" failed. No retries permitted until 2021-08-23 18:20:35.940075822 +0000 UTC m=+1438.349108679 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) since it's in attaching or detaching state
I0823 18:20:36.568481       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 20 items received
I0823 18:20:39.419280       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="131.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38154" resp=200
I0823 18:20:39.748088       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-hbpcn) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) returned with <nil>
I0823 18:20:39.748136       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85) succeeded
I0823 18:20:39.748148       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 was detached from node:capz-tj2yec-md-0-hbpcn
I0823 18:20:39.748310       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85") on node "capz-tj2yec-md-0-hbpcn" 
... skipping 8 lines ...
I0823 18:20:49.928121       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: volume is bound to claim azuredisk-8510/pvc-xjj2l
I0823 18:20:49.928138       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: claim azuredisk-8510/pvc-xjj2l found: phase: Bound, bound to: "pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19", bindCompleted: true, boundByController: true
I0823 18:20:49.928151       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: all is bound
I0823 18:20:49.928160       1 pv_controller.go:858] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: set phase Bound
I0823 18:20:49.928169       1 pv_controller.go:861] updating PersistentVolume[pvc-626e6ba0-503e-42ad-8e7f-f8eee4972b19]: phase Bound already set
I0823 18:20:49.928190       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3326
I0823 18:20:49.928212       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: phase: Failed, bound to: "azuredisk-8510/pvc-xbvw9 (uid: 1a516c1c-cad9-4955-a3ea-6b87bab4bd85)", boundByController: true
I0823 18:20:49.928235       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is bound to claim azuredisk-8510/pvc-xbvw9
I0823 18:20:49.928252       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: claim azuredisk-8510/pvc-xbvw9 not found
I0823 18:20:49.928259       1 pv_controller.go:1108] reclaimVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: policy is Delete
I0823 18:20:49.928278       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]]
I0823 18:20:49.928308       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85] started
I0823 18:20:49.928397       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8510/pvc-xjj2l" with version 3144
... skipping 22 lines ...
I0823 18:20:55.152066       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85
I0823 18:20:55.152190       1 pv_controller.go:1435] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" deleted
I0823 18:20:55.152272       1 pv_controller.go:1283] deleteVolumeOperation [pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: success
I0823 18:20:55.161152       1 pv_protection_controller.go:205] Got event on PV pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85
I0823 18:20:55.161182       1 pv_protection_controller.go:125] Processing PV pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85
I0823 18:20:55.161478       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" with version 3369
I0823 18:20:55.161514       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: phase: Failed, bound to: "azuredisk-8510/pvc-xbvw9 (uid: 1a516c1c-cad9-4955-a3ea-6b87bab4bd85)", boundByController: true
I0823 18:20:55.161541       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: volume is bound to claim azuredisk-8510/pvc-xbvw9
I0823 18:20:55.161561       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: claim azuredisk-8510/pvc-xbvw9 not found
I0823 18:20:55.161569       1 pv_controller.go:1108] reclaimVolume[pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85]: policy is Delete
I0823 18:20:55.161584       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]]
I0823 18:20:55.161595       1 pv_controller.go:1763] operation "delete-pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85[ed7e91cd-3f13-42b3-9b49-7537999c2d8b]" is already running, skipping
I0823 18:20:55.167175       1 pv_controller_base.go:235] volume "pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85" deleted
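At this point the managed disk and the PV object for pvc-1a516c1c-cad9-4955-a3ea-6b87bab4bd85 are finally gone. A test or client that needs to block until a PV disappears can poll for it with client-go, roughly as in this sketch; the interval, timeout, and clientset wiring are illustrative.

package pvwait

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls until the named PersistentVolume no longer exists.
func waitForPVDeleted(ctx context.Context, cs kubernetes.Interface, pvName string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // the PV is gone
		}
		if err != nil {
			return false, err // stop on unexpected API errors
		}
		return false, nil // still present, keep polling
	})
}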
... skipping 497 lines ...
I0823 18:21:46.552904       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:21:46.552958       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:21:46.552986       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:21:46.553008       1 pv_controller.go:1763] operation "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" is already running, skipping
I0823 18:21:46.555164       1 pv_controller.go:1340] isVolumeReleased[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is released
I0823 18:21:46.555180       1 pv_controller.go:1404] doDeleteVolume [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]
I0823 18:21:46.581779       1 pv_controller.go:1259] deletion of volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:21:46.581807       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: set phase Failed
I0823 18:21:46.581816       1 pv_controller.go:858] updating PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: set phase Failed
I0823 18:21:46.585376       1 pv_protection_controller.go:205] Got event on PV pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8
I0823 18:21:46.585416       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:21:46.585445       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:21:46.585471       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:21:46.585489       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:21:46.585497       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:21:46.585511       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:21:46.585519       1 pv_controller.go:1763] operation "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" is already running, skipping
I0823 18:21:46.586386       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:21:46.586430       1 pv_controller.go:879] volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" entered phase "Failed"
I0823 18:21:46.586444       1 pv_controller.go:901] volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
E0823 18:21:46.586496       1 goroutinemap.go:150] Operation for "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" failed. No retries permitted until 2021-08-23 18:21:47.086479272 +0000 UTC m=+1509.495512029 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:21:46.586561       1 event.go:291] "Event occurred" object="pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted"
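Each failed delete is also surfaced as a Warning event (reason VolumeFailedDelete) on the PersistentVolume, which is usually the quickest thing to check when a PV is stuck in Failed. The sketch below pulls those events with client-go; events for cluster-scoped objects such as PVs are typically recorded in the "default" namespace, and the clientset wiring is omitted.

package pvevents

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printPVEvents lists the events recorded against a PersistentVolume.
func printPVEvents(ctx context.Context, cs kubernetes.Interface, pvName string) error {
	events, err := cs.CoreV1().Events("default").List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.kind=PersistentVolume,involvedObject.name=" + pvName,
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
	return nil
}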
I0823 18:21:47.549956       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 7 items received
I0823 18:21:48.950720       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-md-0-792q5"
I0823 18:21:48.950854       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954 to the node "capz-tj2yec-md-0-792q5" mounted false
I0823 18:21:48.950870       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 to the node "capz-tj2yec-md-0-792q5" mounted false
I0823 18:21:48.971091       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8\"}]}}" for node "capz-tj2yec-md-0-792q5" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 1}]
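The attach/detach controller keeps node.status.volumesAttached in sync as disks come and go, which is what the node_status_updater line above is doing. A small sketch for inspecting which azure-disk volumes the API server still considers attached to a node follows; clientset wiring is omitted and the node name would be an argument such as capz-tj2yec-md-0-792q5 from this log.

package nodevols

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printAttachedVolumes lists the volumes reported in node.status.volumesAttached.
func printAttachedVolumes(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, va := range node.Status.VolumesAttached {
		fmt.Printf("%s (devicePath=%s)\n", va.Name, va.DevicePath)
	}
	return nil
}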
... skipping 31 lines ...
I0823 18:21:49.932469       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:21:49.932392       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: claim azuredisk-5561/pvc-9gsp7 found: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:21:49.932648       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: all is bound
I0823 18:21:49.932682       1 pv_controller.go:858] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: set phase Bound
I0823 18:21:49.932719       1 pv_controller.go:861] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: phase Bound already set
I0823 18:21:49.932775       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:21:49.932843       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:21:49.932909       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:21:49.932966       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:21:49.933000       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:21:49.933054       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:21:49.933124       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8] started
I0823 18:21:49.933199       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" found: phase: Bound, bound to: "azuredisk-5561/pvc-9gsp7 (uid: d4e7bf0c-4099-48d1-b0b2-c5c116da3954)", boundByController: true
... skipping 11 lines ...
I0823 18:21:49.934198       1 pv_controller.go:1039] volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" status after binding: phase: Bound, bound to: "azuredisk-5561/pvc-9gsp7 (uid: d4e7bf0c-4099-48d1-b0b2-c5c116da3954)", boundByController: true
I0823 18:21:49.934316       1 pv_controller.go:1040] claim "azuredisk-5561/pvc-9gsp7" status after binding: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:21:49.936687       1 pv_controller.go:1340] isVolumeReleased[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is released
I0823 18:21:49.936952       1 pv_controller.go:1404] doDeleteVolume [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]
I0823 18:21:49.945704       1 gc_controller.go:161] GC'ing orphaned
I0823 18:21:49.945729       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:21:49.968326       1 pv_controller.go:1259] deletion of volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:21:49.968352       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: set phase Failed
I0823 18:21:49.968362       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase Failed already set
E0823 18:21:49.968527       1 goroutinemap.go:150] Operation for "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" failed. No retries permitted until 2021-08-23 18:21:50.968372226 +0000 UTC m=+1513.377404983 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:21:50.072565       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0823 18:21:50.151772       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-md-0-792q5 ReadyCondition updated. Updating timestamp.
I0823 18:21:51.512913       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:21:59.418718       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="78.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38932" resp=200
I0823 18:22:02.865788       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 17 items received
I0823 18:22:04.539320       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954) returned with <nil>
... skipping 16 lines ...
I0823 18:22:04.933715       1 pv_controller.go:910] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: binding to "azuredisk-5561/pvc-9gsp7"
I0823 18:22:04.933787       1 pv_controller.go:922] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: already bound to "azuredisk-5561/pvc-9gsp7"
I0823 18:22:04.933629       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: all is bound
I0823 18:22:04.933953       1 pv_controller.go:858] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: set phase Bound
I0823 18:22:04.934045       1 pv_controller.go:861] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: phase Bound already set
I0823 18:22:04.934144       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:22:04.934271       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:22:04.934305       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:22:04.933839       1 pv_controller.go:858] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: set phase Bound
I0823 18:22:04.934355       1 pv_controller.go:861] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: phase Bound already set
I0823 18:22:04.934367       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: binding to "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954"
I0823 18:22:04.934397       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: already bound to "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954"
I0823 18:22:04.934458       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7] status: set phase Bound
... skipping 4 lines ...
I0823 18:22:04.934727       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:22:04.934745       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:22:04.934765       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:22:04.934904       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8] started
I0823 18:22:04.942919       1 pv_controller.go:1340] isVolumeReleased[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is released
I0823 18:22:04.942940       1 pv_controller.go:1404] doDeleteVolume [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]
I0823 18:22:04.943097       1 pv_controller.go:1259] deletion of volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) since it's in attaching or detaching state
I0823 18:22:04.943118       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: set phase Failed
I0823 18:22:04.943129       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase Failed already set
E0823 18:22:04.943263       1 goroutinemap.go:150] Operation for "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" failed. No retries permitted until 2021-08-23 18:22:06.943234751 +0000 UTC m=+1529.352267508 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) since it's in attaching or detaching state
I0823 18:22:09.419646       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="71.401µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39022" resp=200
I0823 18:22:09.946476       1 gc_controller.go:161] GC'ing orphaned
I0823 18:22:09.946507       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:22:10.259657       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 10 items received
I0823 18:22:10.879903       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CertificateSigningRequest total 5 items received
I0823 18:22:13.682551       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
... skipping 6 lines ...
I0823 18:22:19.933999       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: volume is bound to claim azuredisk-5561/pvc-9gsp7
I0823 18:22:19.934070       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: claim azuredisk-5561/pvc-9gsp7 found: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:22:19.934125       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: all is bound
I0823 18:22:19.934174       1 pv_controller.go:858] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: set phase Bound
I0823 18:22:19.934210       1 pv_controller.go:861] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: phase Bound already set
I0823 18:22:19.934266       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:22:19.934339       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:22:19.934413       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:22:19.934475       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:22:19.934523       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:22:19.934564       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:22:19.934640       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8] started
I0823 18:22:19.934655       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5561/pvc-9gsp7" with version 3421
... skipping 11 lines ...
I0823 18:22:19.936369       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7] status: phase Bound already set
I0823 18:22:19.936509       1 pv_controller.go:1038] volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" bound to claim "azuredisk-5561/pvc-9gsp7"
I0823 18:22:19.936651       1 pv_controller.go:1039] volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" status after binding: phase: Bound, bound to: "azuredisk-5561/pvc-9gsp7 (uid: d4e7bf0c-4099-48d1-b0b2-c5c116da3954)", boundByController: true
I0823 18:22:19.936808       1 pv_controller.go:1040] claim "azuredisk-5561/pvc-9gsp7" status after binding: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:22:19.940847       1 pv_controller.go:1340] isVolumeReleased[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is released
I0823 18:22:19.940864       1 pv_controller.go:1404] doDeleteVolume [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]
I0823 18:22:19.940901       1 pv_controller.go:1259] deletion of volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) since it's in attaching or detaching state
I0823 18:22:19.940915       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: set phase Failed
I0823 18:22:19.940925       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase Failed already set
E0823 18:22:19.940951       1 goroutinemap.go:150] Operation for "delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]" failed. No retries permitted until 2021-08-23 18:22:23.940933551 +0000 UTC m=+1546.349966408 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) since it's in attaching or detaching state
I0823 18:22:19.990490       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) returned with <nil>
I0823 18:22:19.990534       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8) succeeded
I0823 18:22:19.990544       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8 was detached from node:capz-tj2yec-md-0-792q5
I0823 18:22:19.990719       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8") on node "capz-tj2yec-md-0-792q5" 
I0823 18:22:21.536244       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:22:27.866659       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 10 items received
... skipping 8 lines ...
I0823 18:22:34.934249       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: volume is bound to claim azuredisk-5561/pvc-9gsp7
I0823 18:22:34.934270       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: claim azuredisk-5561/pvc-9gsp7 found: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:22:34.934284       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: all is bound
I0823 18:22:34.934319       1 pv_controller.go:858] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: set phase Bound
I0823 18:22:34.934332       1 pv_controller.go:861] updating PersistentVolume[pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954]: phase Bound already set
I0823 18:22:34.934358       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3524
I0823 18:22:34.934416       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:22:34.934440       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:22:34.934176       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5561/pvc-9gsp7" with version 3421
I0823 18:22:34.934610       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: phase: Bound, bound to: "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954", bindCompleted: true, boundByController: true
I0823 18:22:34.934745       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" found: phase: Bound, bound to: "azuredisk-5561/pvc-9gsp7 (uid: d4e7bf0c-4099-48d1-b0b2-c5c116da3954)", boundByController: true
I0823 18:22:34.934836       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5561/pvc-9gsp7]: claim is already correctly bound
I0823 18:22:34.934876       1 pv_controller.go:1012] binding volume "pvc-d4e7bf0c-4099-48d1-b0b2-c5c116da3954" to claim "azuredisk-5561/pvc-9gsp7"
... skipping 18 lines ...
I0823 18:22:40.142653       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8
I0823 18:22:40.142692       1 pv_controller.go:1435] volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" deleted
I0823 18:22:40.142704       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: success
I0823 18:22:40.150668       1 pv_protection_controller.go:205] Got event on PV pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8
I0823 18:22:40.150776       1 pv_protection_controller.go:125] Processing PV pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8
I0823 18:22:40.150688       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" with version 3604
I0823 18:22:40.151099       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: phase: Failed, bound to: "azuredisk-5561/pvc-fkfqn (uid: 4ad2b637-db6a-41bd-98b1-93b3dc6518f8)", boundByController: true
I0823 18:22:40.151167       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: volume is bound to claim azuredisk-5561/pvc-fkfqn
I0823 18:22:40.151190       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: claim azuredisk-5561/pvc-fkfqn not found
I0823 18:22:40.151232       1 pv_controller.go:1108] reclaimVolume[pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8]: policy is Delete
I0823 18:22:40.151252       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8[3b796b06-abc7-49de-adcb-0c8fd718fe26]]
I0823 18:22:40.151280       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8] started
I0823 18:22:40.160452       1 pv_controller.go:1243] Volume "pvc-4ad2b637-db6a-41bd-98b1-93b3dc6518f8" is already being deleted
... skipping 337 lines ...
I0823 18:22:57.148694       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-953/pvc-f8ssq] status: phase Bound already set
I0823 18:22:57.148721       1 pv_controller.go:1038] volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" bound to claim "azuredisk-953/pvc-f8ssq"
I0823 18:22:57.148741       1 pv_controller.go:1039] volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:22:57.148805       1 pv_controller.go:1040] claim "azuredisk-953/pvc-f8ssq" status after binding: phase: Bound, bound to: "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75", bindCompleted: true, boundByController: true
I0823 18:22:57.258029       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5561
I0823 18:22:57.297387       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5561, name default-token-hlgc8, uid 9c36edca-8222-4f38-aa37-5fa4ca59f038, event type delete
E0823 18:22:57.311664       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5561/default: secrets "default-token-f7bzn" is forbidden: unable to create new content in namespace azuredisk-5561 because it is being terminated
I0823 18:22:57.330852       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5561, name kube-root-ca.crt, uid 2348de2a-4416-456e-a403-8f1fff0d3e93, event type delete
I0823 18:22:57.337944       1 publisher.go:186] Finished syncing namespace "azuredisk-5561" (7.497554ms)
I0823 18:22:57.361358       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5561/default), service account deleted, removing tokens
I0823 18:22:57.361418       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5561, name default, uid e63832e2-2730-4117-9e9d-af7f29286e70, event type delete
I0823 18:22:57.361458       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5561" (2µs)
I0823 18:22:57.386713       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-qgqzs.169e028199945bdf, uid eb5b315f-f8df-41e1-a4dc-e0126e96ac21, event type delete
... skipping 38 lines ...
I0823 18:22:58.227732       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0823 18:22:58.243166       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (180.222402ms)
I0823 18:22:58.891499       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1577
I0823 18:22:58.919917       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1577, name kube-root-ca.crt, uid 43c793bb-c44b-4f82-a2e4-12332f3a0fd1, event type delete
I0823 18:22:58.922906       1 publisher.go:186] Finished syncing namespace "azuredisk-1577" (3.314424ms)
I0823 18:22:59.041322       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1577, name default-token-6wwq4, uid 7349421c-48c1-4747-a315-e0309f564800, event type delete
E0823 18:22:59.079432       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1577/default: secrets "default-token-vn8dt" is forbidden: unable to create new content in namespace azuredisk-1577 because it is being terminated
I0823 18:22:59.150873       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1577/default), service account deleted, removing tokens
I0823 18:22:59.151057       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1577, name default, uid 12ed8a78-d2a3-4720-bbee-668e5f95bbf6, event type delete
I0823 18:22:59.151252       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1577" (2.9µs)
I0823 18:22:59.185680       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1577" (3.2µs)
I0823 18:22:59.185967       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1577, estimate: 0, errors: <nil>
I0823 18:22:59.195348       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1577" (307.786724ms)
... skipping 361 lines ...
I0823 18:23:35.518246       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:23:35.518347       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:23:35.518435       1 pv_controller.go:1763] operation "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" is already running, skipping
I0823 18:23:35.517941       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75] started
I0823 18:23:35.520796       1 pv_controller.go:1340] isVolumeReleased[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is released
I0823 18:23:35.520814       1 pv_controller.go:1404] doDeleteVolume [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]
I0823 18:23:35.582058       1 pv_controller.go:1259] deletion of volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:23:35.582090       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: set phase Failed
I0823 18:23:35.582101       1 pv_controller.go:858] updating PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: set phase Failed
I0823 18:23:35.586748       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:23:35.586788       1 pv_controller.go:879] volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" entered phase "Failed"
I0823 18:23:35.586801       1 pv_controller.go:901] volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
E0823 18:23:35.586846       1 goroutinemap.go:150] Operation for "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" failed. No retries permitted until 2021-08-23 18:23:36.086824309 +0000 UTC m=+1618.495857166 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:23:35.586891       1 event.go:291] "Event occurred" object="pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted"
I0823 18:23:35.587323       1 pv_protection_controller.go:205] Got event on PV pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75
I0823 18:23:35.587348       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:23:35.587372       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:23:35.587394       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:23:35.587408       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:23:35.587415       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:23:35.587429       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:23:35.587437       1 pv_controller.go:1765] operation "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" postponed due to exponential backoff
I0823 18:23:39.418914       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="83.301µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39906" resp=200
... skipping 45 lines ...
I0823 18:23:49.938362       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: volume is bound to claim azuredisk-953/pvc-xx2pq
I0823 18:23:49.938378       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: claim azuredisk-953/pvc-xx2pq found: phase: Bound, bound to: "pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a", bindCompleted: true, boundByController: true
I0823 18:23:49.938391       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: all is bound
I0823 18:23:49.938399       1 pv_controller.go:858] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: set phase Bound
I0823 18:23:49.938408       1 pv_controller.go:861] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: phase Bound already set
I0823 18:23:49.938419       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:23:49.938451       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:23:49.938472       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:23:49.938491       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:23:49.938498       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:23:49.938515       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:23:49.938543       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75] started
I0823 18:23:49.938756       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-953/pvc-xx2pq" with version 3681
... skipping 29 lines ...
I0823 18:23:49.939256       1 pv_controller.go:1039] volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-d99dz (uid: 3eb3107c-5d92-4816-bf77-2c713eaead7a)", boundByController: true
I0823 18:23:49.939278       1 pv_controller.go:1040] claim "azuredisk-953/pvc-d99dz" status after binding: phase: Bound, bound to: "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a", bindCompleted: true, boundByController: true
I0823 18:23:49.941485       1 pv_controller.go:1340] isVolumeReleased[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is released
I0823 18:23:49.941632       1 pv_controller.go:1404] doDeleteVolume [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]
I0823 18:23:49.949865       1 gc_controller.go:161] GC'ing orphaned
I0823 18:23:49.949891       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:23:49.965281       1 pv_controller.go:1259] deletion of volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:23:49.965489       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: set phase Failed
I0823 18:23:49.965511       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase Failed already set
E0823 18:23:49.965550       1 goroutinemap.go:150] Operation for "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" failed. No retries permitted until 2021-08-23 18:23:50.965525386 +0000 UTC m=+1633.374558143 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:23:51.530308       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 8 items received
I0823 18:23:51.594202       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:23:51.855829       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 7 items received
I0823 18:23:55.318291       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a) returned with <nil>
I0823 18:23:55.318351       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a) succeeded
I0823 18:23:55.318365       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a was detached from node:capz-tj2yec-md-0-792q5
... skipping 18 lines ...
I0823 18:24:04.939608       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: volume is bound to claim azuredisk-953/pvc-xx2pq
I0823 18:24:04.939627       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: claim azuredisk-953/pvc-xx2pq found: phase: Bound, bound to: "pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a", bindCompleted: true, boundByController: true
I0823 18:24:04.939641       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: all is bound
I0823 18:24:04.939650       1 pv_controller.go:858] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: set phase Bound
I0823 18:24:04.939675       1 pv_controller.go:861] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: phase Bound already set
I0823 18:24:04.939693       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:24:04.939728       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:24:04.939749       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:24:04.939766       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:24:04.939776       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:24:04.939820       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:24:04.939854       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75] started
I0823 18:24:04.939377       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-953/pvc-xx2pq]: phase: Bound, bound to: "pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a", bindCompleted: true, boundByController: true
... skipping 26 lines ...
I0823 18:24:04.942697       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-953/pvc-d99dz] status: phase Bound already set
I0823 18:24:04.942772       1 pv_controller.go:1038] volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" bound to claim "azuredisk-953/pvc-d99dz"
I0823 18:24:04.942855       1 pv_controller.go:1039] volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-d99dz (uid: 3eb3107c-5d92-4816-bf77-2c713eaead7a)", boundByController: true
I0823 18:24:04.942937       1 pv_controller.go:1040] claim "azuredisk-953/pvc-d99dz" status after binding: phase: Bound, bound to: "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a", bindCompleted: true, boundByController: true
I0823 18:24:04.952266       1 pv_controller.go:1340] isVolumeReleased[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is released
I0823 18:24:04.952283       1 pv_controller.go:1404] doDeleteVolume [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]
I0823 18:24:04.975905       1 pv_controller.go:1259] deletion of volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:24:04.975930       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: set phase Failed
I0823 18:24:04.975941       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase Failed already set
E0823 18:24:04.976000       1 goroutinemap.go:150] Operation for "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" failed. No retries permitted until 2021-08-23 18:24:06.975977661 +0000 UTC m=+1649.385010518 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/virtualMachines/capz-tj2yec-md-0-792q5), could not be deleted
I0823 18:24:09.418633       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="92.3µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40198" resp=200
I0823 18:24:09.950160       1 gc_controller.go:161] GC'ing orphaned
I0823 18:24:09.950200       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0823 18:24:10.829280       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a) returned with <nil>
I0823 18:24:10.829330       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a) succeeded
I0823 18:24:10.829342       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a was detached from node:capz-tj2yec-md-0-792q5
... skipping 10 lines ...
I0823 18:24:19.940303       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: volume is bound to claim azuredisk-953/pvc-xx2pq
I0823 18:24:19.940316       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: claim azuredisk-953/pvc-xx2pq found: phase: Bound, bound to: "pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a", bindCompleted: true, boundByController: true
I0823 18:24:19.940328       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: all is bound
I0823 18:24:19.940335       1 pv_controller.go:858] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: set phase Bound
I0823 18:24:19.940343       1 pv_controller.go:861] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: phase Bound already set
I0823 18:24:19.940353       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:24:19.940364       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:24:19.940394       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:24:19.940407       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:24:19.940417       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:24:19.940433       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:24:19.940448       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" with version 3672
I0823 18:24:19.940462       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a]: phase: Bound, bound to: "azuredisk-953/pvc-d99dz (uid: 3eb3107c-5d92-4816-bf77-2c713eaead7a)", boundByController: true
... skipping 34 lines ...
I0823 18:24:19.944310       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-953/pvc-d99dz] status: phase Bound already set
I0823 18:24:19.944385       1 pv_controller.go:1038] volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" bound to claim "azuredisk-953/pvc-d99dz"
I0823 18:24:19.944465       1 pv_controller.go:1039] volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-d99dz (uid: 3eb3107c-5d92-4816-bf77-2c713eaead7a)", boundByController: true
I0823 18:24:19.944558       1 pv_controller.go:1040] claim "azuredisk-953/pvc-d99dz" status after binding: phase: Bound, bound to: "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a", bindCompleted: true, boundByController: true
I0823 18:24:20.008121       1 pv_controller.go:1340] isVolumeReleased[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is released
I0823 18:24:20.008143       1 pv_controller.go:1404] doDeleteVolume [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]
I0823 18:24:20.008183       1 pv_controller.go:1259] deletion of volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) since it's in attaching or detaching state
I0823 18:24:20.008198       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: set phase Failed
I0823 18:24:20.008207       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase Failed already set
E0823 18:24:20.008237       1 goroutinemap.go:150] Operation for "delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]" failed. No retries permitted until 2021-08-23 18:24:24.008216599 +0000 UTC m=+1666.417249456 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) since it's in attaching or detaching state
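
The retry delays recorded above for the failing delete operation (durationBeforeRetry 500ms, then 1s, 2s, 4s) show the controller's per-operation exponential backoff: each failed attempt roughly doubles the wait before the next retry, until the disk finally detaches and the delete succeeds. A minimal sketch of that doubling, assuming simple doubling with a cap; the initial delay and the cap are inferred from this log, not taken from the controller source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Illustrative doubling of the retry delay seen in the log lines above
        // (500ms -> 1s -> 2s -> 4s). Initial value and cap are assumptions
        // inferred from this log, not constants read from the controller code.
        delay := 500 * time.Millisecond
        maxDelay := 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
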
I0823 18:24:20.013705       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-tj2yec-control-plane-r8mns"
I0823 18:24:20.178021       1 node_lifecycle_controller.go:1047] Node capz-tj2yec-control-plane-r8mns ReadyCondition updated. Updating timestamp.
I0823 18:24:21.616309       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0823 18:24:23.455491       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 25 items received
I0823 18:24:26.373022       1 azure_controller_standard.go:184] azureDisk - update(capz-tj2yec): vm(capz-tj2yec-md-0-792q5) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) returned with <nil>
I0823 18:24:26.373071       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75) succeeded
... skipping 38 lines ...
I0823 18:24:34.941761       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-953/pvc-d99dz]: binding to "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a"
I0823 18:24:34.941853       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-953/pvc-d99dz]: already bound to "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a"
I0823 18:24:34.941865       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-953/pvc-d99dz] status: set phase Bound
I0823 18:24:34.941567       1 pv_controller.go:858] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: set phase Bound
I0823 18:24:34.941931       1 pv_controller.go:861] updating PersistentVolume[pvc-fe08ae8b-e93c-4c23-870e-8aeea24a739a]: phase Bound already set
I0823 18:24:34.941955       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3797
I0823 18:24:34.941981       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:24:34.942028       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:24:34.942050       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:24:34.942058       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:24:34.942097       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:24:34.942116       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a" with version 3672
I0823 18:24:34.942137       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3eb3107c-5d92-4816-bf77-2c713eaead7a]: phase: Bound, bound to: "azuredisk-953/pvc-d99dz (uid: 3eb3107c-5d92-4816-bf77-2c713eaead7a)", boundByController: true
... skipping 14 lines ...
I0823 18:24:40.161367       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-tj2yec/providers/Microsoft.Compute/disks/capz-tj2yec-dynamic-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75
I0823 18:24:40.161404       1 pv_controller.go:1435] volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" deleted
I0823 18:24:40.161449       1 pv_controller.go:1283] deleteVolumeOperation [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: success
I0823 18:24:40.169337       1 pv_protection_controller.go:205] Got event on PV pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75
I0823 18:24:40.169369       1 pv_protection_controller.go:125] Processing PV pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75
I0823 18:24:40.169715       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" with version 3892
I0823 18:24:40.169766       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: phase: Failed, bound to: "azuredisk-953/pvc-f8ssq (uid: 5292244a-ea70-40b2-bbf3-e23562ab7c75)", boundByController: true
I0823 18:24:40.169797       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: volume is bound to claim azuredisk-953/pvc-f8ssq
I0823 18:24:40.169816       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: claim azuredisk-953/pvc-f8ssq not found
I0823 18:24:40.169823       1 pv_controller.go:1108] reclaimVolume[pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75]: policy is Delete
I0823 18:24:40.169838       1 pv_controller.go:1752] scheduleOperation[delete-pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75[b0850dcb-851d-4bcb-b1f2-b4a9cb7f70cb]]
I0823 18:24:40.169861       1 pv_controller.go:1231] deleteVolumeOperation [pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75] started
I0823 18:24:40.173644       1 pv_controller.go:1243] Volume "pvc-5292244a-ea70-40b2-bbf3-e23562ab7c75" is already being deleted
... skipping 315 lines ...
I0823 18:25:07.639326       1 publisher.go:186] Finished syncing namespace "azuredisk-3033" (3.355824ms)
I0823 18:25:07.657746       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3033, estimate: 0, errors: <nil>
I0823 18:25:07.657873       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3033" (2.4µs)
I0823 18:25:07.668410       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3033" (153.588703ms)
I0823 18:25:08.299892       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9336
I0823 18:25:08.323979       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9336, name default-token-lt6bt, uid bcaa0fd9-2dd8-4539-8244-bbfc654221e0, event type delete
E0823 18:25:08.351652       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-5t5mc" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I0823 18:25:08.408066       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9336, name kube-root-ca.crt, uid 36d46bae-709b-4e27-a1fa-6a64a2eed5ea, event type delete
I0823 18:25:08.412815       1 publisher.go:186] Finished syncing namespace "azuredisk-9336" (5.074136ms)
I0823 18:25:08.429655       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9336/default), service account deleted, removing tokens
I0823 18:25:08.429700       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9336, name default, uid bb684a28-bb9a-4e0b-a931-b4ebe1c614d4, event type delete
I0823 18:25:08.429743       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9336" (1.9µs)
I0823 18:25:08.451749       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9336, estimate: 0, errors: <nil>
... skipping 487 lines ...
I0823 18:26:36.341616       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name azuredisk-volume-tester-89vqh.169e02c36c973a3f, uid 766d6258-735a-4ae6-8f09-fc3ffabc53ad, event type delete
I0823 18:26:36.345383       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name pvc-j8rsx.169e02b7e474cf16, uid 65b2f770-c807-4fd6-b7eb-1309a4a9446a, event type delete
I0823 18:26:36.349891       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name pvc-j8rsx.169e02b87550f1fc, uid ea8cc617-caf6-4ce7-b986-713472761e76, event type delete
I0823 18:26:36.375840       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-552, name kube-root-ca.crt, uid f801be15-ad5f-47f5-9f9e-43fc7052be70, event type delete
I0823 18:26:36.379336       1 publisher.go:186] Finished syncing namespace "azuredisk-552" (3.758331ms)
I0823 18:26:36.387412       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-552, name default-token-l5wr6, uid 2b3f88fc-b370-4721-b67a-2a104d339d4d, event type delete
E0823 18:26:36.402103       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-552/default: secrets "default-token-8lzhm" is forbidden: unable to create new content in namespace azuredisk-552 because it is being terminated
I0823 18:26:36.439169       1 tokens_controller.go:252] syncServiceAccount(azuredisk-552/default), service account deleted, removing tokens
I0823 18:26:36.439249       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-552, name default, uid c29c154b-7fe8-4b54-9223-35019b0e0eb6, event type delete
I0823 18:26:36.439288       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-552" (2.3µs)
I0823 18:26:36.451359       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-552, estimate: 0, errors: <nil>
I0823 18:26:36.451696       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-552" (2.6µs)
I0823 18:26:36.463957       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-552" (219.576118ms)
... skipping 380 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [single-az] [It] should create multiple PV objects, bind to PVCs and attach all to different pods on the same node [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:734

Ran 12 of 53 Specs in 1539.200 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 41 Skipped
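
The "timed out waiting for the condition" error reported for the failing spec (testsuites.go:734) is the generic message returned by the k8s.io/apimachinery/pkg/util/wait helpers when a polled condition never becomes true within the timeout. A minimal sketch of such a wait, assuming the suite polls with wait.PollImmediate; the interval, timeout, and condition body here are illustrative placeholders, not the test's actual values:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll the condition every 2 seconds for up to 10 seconds. If the
        // condition never returns true, PollImmediate returns wait.ErrWaitTimeout,
        // whose message is exactly "timed out waiting for the condition".
        err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
            // In the e2e suite this would check pod phase / volume attachment;
            // here it never succeeds, so the timeout path is exercised.
            return false, nil
        })
        fmt.Println(err) // prints: timed out waiting for the condition
    }
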
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes.
To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711

... skipping 2 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.4

--- FAIL: TestE2E (1539.22s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	1539.275s
FAIL
make: *** [Makefile:254: e2e-test] Error 1
================ DUMPING LOGS FOR MANAGEMENT CLUSTER ================
Exported logs for cluster "capz" to:
/logs/artifacts/management-cluster
================ DUMPING LOGS FOR WORKLOAD CLUSTER ================
Deploying log-dump-daemonset
daemonset.apps/log-dump-node created
... skipping 22 lines ...