Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-08-26 17:49
Elapsed: 56m51s
Revision: main

Test Failures


AzureDisk CSI Driver End-to-End Tests Dynamic Provisioning [single-az] should detach disk after pod deleted [disk.csi.azure.com] [Windows] (5m5s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureDisk\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\s\[single\-az\]\sshould\sdetach\sdisk\safter\spod\sdeleted\s\[disk\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:790
Unexpected error:
    <*errors.errorString | 0xc000200430>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:735
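The "timed out waiting for the condition" message is the generic error returned by the polling helpers in k8s.io/apimachinery (wait.ErrWaitTimeout), which suggests the assertion at testsuites.go:735 gave up while polling for the detach to complete. Below is a minimal, hypothetical Go sketch of that polling pattern (not the driver's actual test code; the interval, timeout, and checkDiskDetached helper are assumptions) showing how this error surfaces:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkDiskDetached is a placeholder for whatever check the real test performs
// (e.g. inspecting the node's attached volumes); it always returns false here
// so the poll below times out.
func checkDiskDetached() bool {
	return false
}

func main() {
	// Poll every 5s for up to 5m. If the condition never returns true,
	// wait.PollImmediate returns wait.ErrWaitTimeout, whose message is
	// exactly "timed out waiting for the condition".
	err := wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		return checkDiskDetached(), nil
	})
	if err != nil {
		fmt.Println(err) // prints: timed out waiting for the condition
	}
}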
				



11 Passed Tests

41 Skipped Tests

Error lines from build-log.txt

... skipping 792 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client
Aug 26 18:08:01.705: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
I0826 18:08:02.284135   31821 azuredisk_driver.go:56] Using azure disk driver: kubernetes.io/azure-disk
... skipping 2 lines ...

S [SKIPPING] [0.815 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:67
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 81 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 26 18:08:07.402: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-p965f" in namespace "azuredisk-5356" to be "Succeeded or Failed"
Aug 26 18:08:07.519: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 117.263817ms
Aug 26 18:08:09.635: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233632834s
Aug 26 18:08:11.753: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35151673s
Aug 26 18:08:13.870: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46796184s
Aug 26 18:08:15.987: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58478122s
Aug 26 18:08:18.105: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702795329s
... skipping 2 lines ...
Aug 26 18:08:24.456: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.054424111s
Aug 26 18:08:26.575: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.172944721s
Aug 26 18:08:28.691: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.288939281s
Aug 26 18:08:30.808: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.406509225s
Aug 26 18:08:32.925: INFO: Pod "azuredisk-volume-tester-p965f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 25.523565144s
STEP: Saw pod success
Aug 26 18:08:32.925: INFO: Pod "azuredisk-volume-tester-p965f" satisfied condition "Succeeded or Failed"
Aug 26 18:08:32.925: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-p965f"
Aug 26 18:08:33.055: INFO: Pod azuredisk-volume-tester-p965f has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-p965f in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:08:33.427: INFO: deleting PVC "azuredisk-5356"/"pvc-rkfpg"
Aug 26 18:08:33.427: INFO: Deleting PersistentVolumeClaim "pvc-rkfpg"
STEP: waiting for claim's PV "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" to be deleted
Aug 26 18:08:33.548: INFO: Waiting up to 10m0s for PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 to get deleted
Aug 26 18:08:33.664: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (116.479809ms)
Aug 26 18:08:38.781: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (5.233487684s)
Aug 26 18:08:43.898: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (10.350145095s)
Aug 26 18:08:49.017: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (15.469749227s)
Aug 26 18:08:54.138: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (20.59044304s)
Aug 26 18:08:59.255: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 found and phase=Failed (25.707644888s)
Aug 26 18:09:04.373: INFO: PersistentVolume pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 was removed
Aug 26 18:09:04.373: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Aug 26 18:09:04.488: INFO: Claim "azuredisk-5356" in namespace "pvc-rkfpg" doesn't exist in the system
Aug 26 18:09:04.488: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-lz5qv
Aug 26 18:09:04.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 77 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Aug 26 18:09:34.358: INFO: deleting Pod "azuredisk-1957"/"azuredisk-volume-tester-n7szf"
Aug 26 18:09:34.479: INFO: Error getting logs for pod azuredisk-volume-tester-n7szf: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-n7szf)
STEP: Deleting pod azuredisk-volume-tester-n7szf in namespace azuredisk-1957
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:09:34.830: INFO: deleting PVC "azuredisk-1957"/"pvc-8wwzr"
Aug 26 18:09:34.830: INFO: Deleting PersistentVolumeClaim "pvc-8wwzr"
STEP: waiting for claim's PV "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" to be deleted
... skipping 16 lines ...
Aug 26 18:10:51.880: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m16.925416553s)
Aug 26 18:10:56.997: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m22.042128448s)
Aug 26 18:11:02.114: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m27.158637116s)
Aug 26 18:11:07.232: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m32.277416057s)
Aug 26 18:11:12.350: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m37.395147528s)
Aug 26 18:11:17.469: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Bound (1m42.514441718s)
Aug 26 18:11:22.586: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (1m47.631095924s)
Aug 26 18:11:27.706: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (1m52.750607189s)
Aug 26 18:11:32.823: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (1m57.86778372s)
Aug 26 18:11:37.943: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (2m2.987716725s)
Aug 26 18:11:43.062: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (2m8.10712876s)
Aug 26 18:11:48.179: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (2m13.224343727s)
Aug 26 18:11:53.298: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (2m18.343384255s)
Aug 26 18:11:58.417: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa found and phase=Failed (2m23.461518408s)
Aug 26 18:12:03.537: INFO: PersistentVolume pvc-361e3749-11a7-4fef-821c-07d1ccf656fa was removed
Aug 26 18:12:03.537: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1957 to be removed
Aug 26 18:12:03.652: INFO: Claim "azuredisk-1957" in namespace "pvc-8wwzr" doesn't exist in the system
Aug 26 18:12:03.652: INFO: deleting StorageClass azuredisk-1957-kubernetes.io-azure-disk-dynamic-sc-568c4
Aug 26 18:12:03.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1957" for this suite.
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 26 18:12:06.485: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nr8qx" in namespace "azuredisk-8705" to be "Succeeded or Failed"
Aug 26 18:12:06.602: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 116.70253ms
Aug 26 18:12:08.719: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233797324s
Aug 26 18:12:10.836: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350306588s
Aug 26 18:12:12.953: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467553433s
Aug 26 18:12:15.071: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.585552839s
Aug 26 18:12:17.187: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702038394s
Aug 26 18:12:19.304: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 12.818687058s
Aug 26 18:12:21.420: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 14.935051865s
Aug 26 18:12:23.537: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 17.052010293s
Aug 26 18:12:25.658: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 19.17237989s
Aug 26 18:12:27.774: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Pending", Reason="", readiness=false. Elapsed: 21.289011809s
Aug 26 18:12:29.891: INFO: Pod "azuredisk-volume-tester-nr8qx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.406206968s
STEP: Saw pod success
Aug 26 18:12:29.891: INFO: Pod "azuredisk-volume-tester-nr8qx" satisfied condition "Succeeded or Failed"
Aug 26 18:12:29.892: INFO: deleting Pod "azuredisk-8705"/"azuredisk-volume-tester-nr8qx"
Aug 26 18:12:30.022: INFO: Pod azuredisk-volume-tester-nr8qx has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-nr8qx in namespace azuredisk-8705
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:12:30.380: INFO: deleting PVC "azuredisk-8705"/"pvc-dqgfv"
Aug 26 18:12:30.380: INFO: Deleting PersistentVolumeClaim "pvc-dqgfv"
STEP: waiting for claim's PV "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" to be deleted
Aug 26 18:12:30.505: INFO: Waiting up to 10m0s for PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e to get deleted
Aug 26 18:12:30.622: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (116.822401ms)
Aug 26 18:12:35.739: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (5.233201759s)
Aug 26 18:12:40.857: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (10.351445054s)
Aug 26 18:12:45.978: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (15.472929025s)
Aug 26 18:12:51.096: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (20.590426945s)
Aug 26 18:12:56.214: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e found and phase=Failed (25.7088641s)
Aug 26 18:13:01.332: INFO: PersistentVolume pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e was removed
Aug 26 18:13:01.332: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8705 to be removed
Aug 26 18:13:01.447: INFO: Claim "azuredisk-8705" in namespace "pvc-dqgfv" doesn't exist in the system
Aug 26 18:13:01.447: INFO: deleting StorageClass azuredisk-8705-kubernetes.io-azure-disk-dynamic-sc-svk9x
Aug 26 18:13:01.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8705" for this suite.
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Aug 26 18:13:04.280: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-8g8nw" in namespace "azuredisk-2451" to be "Error status code"
Aug 26 18:13:04.396: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 115.512541ms
Aug 26 18:13:06.514: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233254784s
Aug 26 18:13:08.634: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.353578487s
Aug 26 18:13:10.751: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.470507702s
Aug 26 18:13:12.868: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.587172347s
Aug 26 18:13:14.984: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.703738362s
Aug 26 18:13:17.101: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.820952028s
Aug 26 18:13:19.217: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.936699836s
Aug 26 18:13:21.334: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 17.053945491s
Aug 26 18:13:23.452: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.171213459s
Aug 26 18:13:25.569: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 21.288391388s
Aug 26 18:13:27.685: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Pending", Reason="", readiness=false. Elapsed: 23.404201227s
Aug 26 18:13:29.801: INFO: Pod "azuredisk-volume-tester-8g8nw": Phase="Failed", Reason="", readiness=false. Elapsed: 25.520860595s
STEP: Saw pod failure
Aug 26 18:13:29.801: INFO: Pod "azuredisk-volume-tester-8g8nw" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Aug 26 18:13:29.936: INFO: deleting Pod "azuredisk-2451"/"azuredisk-volume-tester-8g8nw"
Aug 26 18:13:30.055: INFO: Pod azuredisk-volume-tester-8g8nw has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-8g8nw in namespace azuredisk-2451
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:13:30.409: INFO: deleting PVC "azuredisk-2451"/"pvc-v5v7b"
Aug 26 18:13:30.409: INFO: Deleting PersistentVolumeClaim "pvc-v5v7b"
STEP: waiting for claim's PV "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" to be deleted
Aug 26 18:13:30.537: INFO: Waiting up to 10m0s for PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 to get deleted
Aug 26 18:13:30.652: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (115.269562ms)
Aug 26 18:13:35.768: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (5.231693901s)
Aug 26 18:13:40.885: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (10.348114201s)
Aug 26 18:13:46.002: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (15.465459793s)
Aug 26 18:13:51.120: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (20.583333366s)
Aug 26 18:13:56.237: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 found and phase=Failed (25.700525049s)
Aug 26 18:14:01.354: INFO: PersistentVolume pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 was removed
Aug 26 18:14:01.354: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2451 to be removed
Aug 26 18:14:01.469: INFO: Claim "azuredisk-2451" in namespace "pvc-v5v7b" doesn't exist in the system
Aug 26 18:14:01.469: INFO: deleting StorageClass azuredisk-2451-kubernetes.io-azure-disk-dynamic-sc-hzdc7
Aug 26 18:14:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2451" for this suite.
... skipping 52 lines ...
Aug 26 18:17:22.074: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (5.232322273s)
Aug 26 18:17:27.193: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (10.351266539s)
Aug 26 18:17:32.310: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (15.467363492s)
Aug 26 18:17:37.428: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (20.586254585s)
Aug 26 18:17:42.548: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (25.705665018s)
Aug 26 18:17:47.664: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Bound (30.821654294s)
Aug 26 18:17:52.780: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (35.938055299s)
Aug 26 18:17:57.898: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (41.055354705s)
Aug 26 18:18:03.014: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (46.17141559s)
Aug 26 18:18:08.133: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (51.291048093s)
Aug 26 18:18:13.254: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (56.411601973s)
Aug 26 18:18:18.374: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (1m1.531945994s)
Aug 26 18:18:23.495: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (1m6.652444562s)
Aug 26 18:18:28.612: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 found and phase=Failed (1m11.76947442s)
Aug 26 18:18:33.728: INFO: PersistentVolume pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 was removed
Aug 26 18:18:33.728: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Aug 26 18:18:33.843: INFO: Claim "azuredisk-9828" in namespace "pvc-fnm8w" doesn't exist in the system
Aug 26 18:18:33.843: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-qnfbp
Aug 26 18:18:33.961: INFO: deleting Pod "azuredisk-9828"/"azuredisk-volume-tester-66vk2"
Aug 26 18:18:34.109: INFO: Pod azuredisk-volume-tester-66vk2 has the following logs: 
... skipping 7 lines ...
Aug 26 18:18:34.710: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (115.86281ms)
Aug 26 18:18:39.827: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (5.233152229s)
Aug 26 18:18:44.948: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (10.353896104s)
Aug 26 18:18:50.066: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (15.471307054s)
Aug 26 18:18:55.183: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (20.588254704s)
Aug 26 18:19:00.300: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Bound (25.706092941s)
Aug 26 18:19:05.416: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Failed (30.821814788s)
Aug 26 18:19:10.533: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Failed (35.938730011s)
Aug 26 18:19:15.689: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Failed (41.095001608s)
Aug 26 18:19:20.807: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Failed (46.212327076s)
Aug 26 18:19:25.930: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 found and phase=Failed (51.335765777s)
Aug 26 18:19:31.047: INFO: PersistentVolume pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2 was removed
Aug 26 18:19:31.047: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Aug 26 18:19:31.162: INFO: Claim "azuredisk-9828" in namespace "pvc-sjc5c" doesn't exist in the system
Aug 26 18:19:31.162: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-cw4vd
Aug 26 18:19:31.282: INFO: deleting Pod "azuredisk-9828"/"azuredisk-volume-tester-gq9cw"
Aug 26 18:19:31.422: INFO: Pod azuredisk-volume-tester-gq9cw has the following logs: 
... skipping 8 lines ...
Aug 26 18:19:37.128: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (5.232945104s)
Aug 26 18:19:42.248: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (10.352773662s)
Aug 26 18:19:47.366: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (15.471520579s)
Aug 26 18:19:52.483: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (20.588510342s)
Aug 26 18:19:57.600: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (25.705451187s)
Aug 26 18:20:02.717: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Bound (30.822128303s)
Aug 26 18:20:07.837: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Failed (35.94262763s)
Aug 26 18:20:12.958: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Failed (41.063034319s)
Aug 26 18:20:18.074: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Failed (46.178824458s)
Aug 26 18:20:23.194: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Failed (51.298727138s)
Aug 26 18:20:28.313: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 found and phase=Failed (56.418634576s)
Aug 26 18:20:33.433: INFO: PersistentVolume pvc-eac92eb7-a833-4621-934d-e781bb0d6573 was removed
Aug 26 18:20:33.433: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Aug 26 18:20:33.549: INFO: Claim "azuredisk-9828" in namespace "pvc-cg64p" doesn't exist in the system
Aug 26 18:20:33.549: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-xj8xg
Aug 26 18:20:33.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9828" for this suite.
... skipping 60 lines ...
Aug 26 18:22:19.499: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (115.811918ms)
Aug 26 18:22:24.616: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (5.232176144s)
Aug 26 18:22:29.732: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (10.348828564s)
Aug 26 18:22:34.848: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (15.464826745s)
Aug 26 18:22:39.966: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (20.581946355s)
Aug 26 18:22:45.087: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Bound (25.703376146s)
Aug 26 18:22:50.204: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Failed (30.820169022s)
Aug 26 18:22:55.321: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Failed (35.937055185s)
Aug 26 18:23:00.439: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Failed (41.054961999s)
Aug 26 18:23:05.557: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Failed (46.17326228s)
Aug 26 18:23:10.673: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 found and phase=Failed (51.289498387s)
Aug 26 18:23:15.791: INFO: PersistentVolume pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 was removed
Aug 26 18:23:15.791: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Aug 26 18:23:15.906: INFO: Claim "azuredisk-1563" in namespace "pvc-t5cbw" doesn't exist in the system
Aug 26 18:23:15.906: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-dsbqs
Aug 26 18:23:16.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 155 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 26 18:23:39.158: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2jrdq" in namespace "azuredisk-552" to be "Succeeded or Failed"
Aug 26 18:23:39.274: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 116.220349ms
Aug 26 18:23:41.394: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235631755s
Aug 26 18:23:43.511: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3534888s
Aug 26 18:23:45.632: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.474068854s
Aug 26 18:23:47.748: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.589936049s
Aug 26 18:23:49.864: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.705858091s
... skipping 9 lines ...
Aug 26 18:24:11.042: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.883643924s
Aug 26 18:24:13.160: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.002276278s
Aug 26 18:24:15.277: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.119570518s
Aug 26 18:24:17.395: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Pending", Reason="", readiness=false. Elapsed: 38.237519736s
Aug 26 18:24:19.512: INFO: Pod "azuredisk-volume-tester-2jrdq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 40.354464096s
STEP: Saw pod success
Aug 26 18:24:19.512: INFO: Pod "azuredisk-volume-tester-2jrdq" satisfied condition "Succeeded or Failed"
Aug 26 18:24:19.512: INFO: deleting Pod "azuredisk-552"/"azuredisk-volume-tester-2jrdq"
Aug 26 18:24:19.646: INFO: Pod azuredisk-volume-tester-2jrdq has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-2jrdq in namespace azuredisk-552
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:24:20.004: INFO: deleting PVC "azuredisk-552"/"pvc-wrztz"
Aug 26 18:24:20.004: INFO: Deleting PersistentVolumeClaim "pvc-wrztz"
STEP: waiting for claim's PV "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" to be deleted
Aug 26 18:24:20.123: INFO: Waiting up to 10m0s for PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f to get deleted
Aug 26 18:24:20.243: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (119.340748ms)
Aug 26 18:24:25.362: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (5.238916108s)
Aug 26 18:24:30.481: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (10.357184716s)
Aug 26 18:24:35.599: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (15.47527811s)
Aug 26 18:24:40.716: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (20.592946046s)
Aug 26 18:24:45.837: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (25.713609672s)
Aug 26 18:24:50.956: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (30.832148135s)
Aug 26 18:24:56.073: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (35.949924186s)
Aug 26 18:25:01.191: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (41.067470033s)
Aug 26 18:25:06.309: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (46.185825169s)
Aug 26 18:25:11.429: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (51.305469977s)
Aug 26 18:25:16.548: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (56.424079923s)
Aug 26 18:25:21.666: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (1m1.542102027s)
Aug 26 18:25:26.783: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f found and phase=Failed (1m6.659108816s)
Aug 26 18:25:31.900: INFO: PersistentVolume pvc-25e25903-6ef8-488b-b79a-171f0f80078f was removed
Aug 26 18:25:31.900: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-552 to be removed
Aug 26 18:25:32.016: INFO: Claim "azuredisk-552" in namespace "pvc-wrztz" doesn't exist in the system
Aug 26 18:25:32.016: INFO: deleting StorageClass azuredisk-552-kubernetes.io-azure-disk-dynamic-sc-bj2dk
STEP: validating provisioned PV
STEP: checking the PV
... skipping 49 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 26 18:25:52.575: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-94ctq" in namespace "azuredisk-1351" to be "Succeeded or Failed"
Aug 26 18:25:52.690: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 115.182017ms
Aug 26 18:25:54.807: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231375738s
Aug 26 18:25:56.922: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.347205957s
Aug 26 18:25:59.057: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.482250893s
Aug 26 18:26:01.174: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.599108112s
Aug 26 18:26:03.291: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.715343562s
... skipping 8 lines ...
Aug 26 18:26:22.344: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 29.768334222s
Aug 26 18:26:24.460: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 31.884497042s
Aug 26 18:26:26.576: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 34.00056077s
Aug 26 18:26:28.693: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Pending", Reason="", readiness=false. Elapsed: 36.117278365s
Aug 26 18:26:30.810: INFO: Pod "azuredisk-volume-tester-94ctq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.234906614s
STEP: Saw pod success
Aug 26 18:26:30.810: INFO: Pod "azuredisk-volume-tester-94ctq" satisfied condition "Succeeded or Failed"
Aug 26 18:26:30.810: INFO: deleting Pod "azuredisk-1351"/"azuredisk-volume-tester-94ctq"
Aug 26 18:26:30.943: INFO: Pod azuredisk-volume-tester-94ctq has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.070363 seconds, 1.4GB/s
hello world

STEP: Deleting pod azuredisk-volume-tester-94ctq in namespace azuredisk-1351
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:26:31.298: INFO: deleting PVC "azuredisk-1351"/"pvc-gptqs"
Aug 26 18:26:31.298: INFO: Deleting PersistentVolumeClaim "pvc-gptqs"
STEP: waiting for claim's PV "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" to be deleted
Aug 26 18:26:31.415: INFO: Waiting up to 10m0s for PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 to get deleted
Aug 26 18:26:31.531: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (116.022886ms)
Aug 26 18:26:36.648: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (5.233125317s)
Aug 26 18:26:41.765: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (10.349836713s)
Aug 26 18:26:46.883: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (15.467633112s)
Aug 26 18:26:52.002: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (20.586834227s)
Aug 26 18:26:57.122: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (25.70713178s)
Aug 26 18:27:02.238: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (30.823375055s)
Aug 26 18:27:07.356: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (35.941148289s)
Aug 26 18:27:12.476: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 found and phase=Failed (41.060807833s)
Aug 26 18:27:17.592: INFO: PersistentVolume pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 was removed
Aug 26 18:27:17.592: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1351 to be removed
Aug 26 18:27:17.708: INFO: Claim "azuredisk-1351" in namespace "pvc-gptqs" doesn't exist in the system
Aug 26 18:27:17.708: INFO: deleting StorageClass azuredisk-1351-kubernetes.io-azure-disk-dynamic-sc-b94t2
STEP: validating provisioned PV
STEP: checking the PV
... skipping 94 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Aug 26 18:27:36.688: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-snnkl" in namespace "azuredisk-9267" to be "Succeeded or Failed"
Aug 26 18:27:36.804: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 115.851056ms
Aug 26 18:27:38.922: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233377947s
Aug 26 18:27:41.039: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350732859s
Aug 26 18:27:43.157: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.46834417s
Aug 26 18:27:45.273: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 8.58487206s
Aug 26 18:27:47.390: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 10.702114712s
... skipping 24 lines ...
Aug 26 18:28:40.409: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.721074073s
Aug 26 18:28:42.530: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.841647421s
Aug 26 18:28:44.648: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.959687898s
Aug 26 18:28:46.765: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Pending", Reason="", readiness=false. Elapsed: 1m10.076280953s
Aug 26 18:28:48.882: INFO: Pod "azuredisk-volume-tester-snnkl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.194081731s
STEP: Saw pod success
Aug 26 18:28:48.883: INFO: Pod "azuredisk-volume-tester-snnkl" satisfied condition "Succeeded or Failed"
Aug 26 18:28:48.883: INFO: deleting Pod "azuredisk-9267"/"azuredisk-volume-tester-snnkl"
Aug 26 18:28:49.013: INFO: Pod azuredisk-volume-tester-snnkl has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-snnkl in namespace azuredisk-9267
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:28:49.379: INFO: deleting PVC "azuredisk-9267"/"pvc-zm2hj"
Aug 26 18:28:49.379: INFO: Deleting PersistentVolumeClaim "pvc-zm2hj"
STEP: waiting for claim's PV "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" to be deleted
Aug 26 18:28:49.497: INFO: Waiting up to 10m0s for PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 to get deleted
Aug 26 18:28:49.613: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 found and phase=Failed (115.353206ms)
Aug 26 18:28:54.729: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 found and phase=Failed (5.231478487s)
Aug 26 18:28:59.845: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 found and phase=Failed (10.34737569s)
Aug 26 18:29:04.961: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 found and phase=Failed (15.463767994s)
Aug 26 18:29:10.079: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 found and phase=Failed (20.581849944s)
Aug 26 18:29:15.195: INFO: PersistentVolume pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 was removed
Aug 26 18:29:15.195: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9267 to be removed
Aug 26 18:29:15.311: INFO: Claim "azuredisk-9267" in namespace "pvc-zm2hj" doesn't exist in the system
Aug 26 18:29:15.311: INFO: deleting StorageClass azuredisk-9267-kubernetes.io-azure-disk-dynamic-sc-r5zl7
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:29:15.662: INFO: deleting PVC "azuredisk-9267"/"pvc-mrnvx"
Aug 26 18:29:15.662: INFO: Deleting PersistentVolumeClaim "pvc-mrnvx"
STEP: waiting for claim's PV "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" to be deleted
Aug 26 18:29:15.780: INFO: Waiting up to 10m0s for PersistentVolume pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 to get deleted
Aug 26 18:29:15.895: INFO: PersistentVolume pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 found and phase=Failed (115.317056ms)
Aug 26 18:29:21.013: INFO: PersistentVolume pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 found and phase=Failed (5.23310479s)
Aug 26 18:29:26.132: INFO: PersistentVolume pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 found and phase=Failed (10.352111916s)
Aug 26 18:29:31.248: INFO: PersistentVolume pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 was removed
Aug 26 18:29:31.248: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9267 to be removed
Aug 26 18:29:31.364: INFO: Claim "azuredisk-9267" in namespace "pvc-mrnvx" doesn't exist in the system
Aug 26 18:29:31.364: INFO: deleting StorageClass azuredisk-9267-kubernetes.io-azure-disk-dynamic-sc-wc6js
STEP: validating provisioned PV
STEP: checking the PV
Aug 26 18:29:31.714: INFO: deleting PVC "azuredisk-9267"/"pvc-x7xd6"
Aug 26 18:29:31.714: INFO: Deleting PersistentVolumeClaim "pvc-x7xd6"
STEP: waiting for claim's PV "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" to be deleted
Aug 26 18:29:31.831: INFO: Waiting up to 10m0s for PersistentVolume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a to get deleted
Aug 26 18:29:31.946: INFO: PersistentVolume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a found and phase=Failed (114.607816ms)
Aug 26 18:29:37.062: INFO: PersistentVolume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a found and phase=Failed (5.230767559s)
Aug 26 18:29:42.181: INFO: PersistentVolume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a found and phase=Failed (10.349255744s)
Aug 26 18:29:47.297: INFO: PersistentVolume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a was removed
Aug 26 18:29:47.297: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9267 to be removed
Aug 26 18:29:47.412: INFO: Claim "azuredisk-9267" in namespace "pvc-x7xd6" doesn't exist in the system
Aug 26 18:29:47.412: INFO: deleting StorageClass azuredisk-9267-kubernetes.io-azure-disk-dynamic-sc-j7g2k
Aug 26 18:29:47.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9267" for this suite.
... skipping 164 lines ...
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:43
    should detach disk after pod deleted [disk.csi.azure.com] [Windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:790

    Unexpected error:
        <*errors.errorString | 0xc000200430>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

... skipping 287 lines ...
I0826 18:02:43.565297       1 tlsconfig.go:178] "Loaded client CA" index=1 certName="client-ca-bundle::/etc/kubernetes/pki/ca.crt,request-header::/etc/kubernetes/pki/front-proxy-ca.crt" certDetail="\"kubernetes\" [] issuer=\"<self>\" (2021-08-26 17:55:28 +0000 UTC to 2031-08-24 18:00:28 +0000 UTC (now=2021-08-26 18:02:43.565269379 +0000 UTC))"
I0826 18:02:43.565765       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1630000962\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1630000961\" (2021-08-26 17:02:40 +0000 UTC to 2022-08-26 17:02:40 +0000 UTC (now=2021-08-26 18:02:43.565726469 +0000 UTC))"
I0826 18:02:43.566165       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1630000963\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1630000962\" (2021-08-26 17:02:42 +0000 UTC to 2022-08-26 17:02:42 +0000 UTC (now=2021-08-26 18:02:43.56613546 +0000 UTC))"
I0826 18:02:43.566380       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0826 18:02:43.566598       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0826 18:02:43.567555       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0826 18:02:46.856562       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0826 18:02:46.856666       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0826 18:02:49.296065       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0826 18:02:49.299605       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-z3rmsd-control-plane-799m2_7423bd81-238a-4f06-ba3a-12f03b1dacbf became leader"
W0826 18:02:49.356053       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0826 18:02:49.357464       1 azure_auth.go:232] Using AzurePublicCloud environment
I0826 18:02:49.357600       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0826 18:02:49.357784       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0826 18:02:49.361058       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0826 18:02:49.361412       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0826 18:02:49.361738       1 reflector.go:219] Starting reflector *v1.Node (12h21m23.141014346s) from k8s.io/client-go/informers/factory.go:134
I0826 18:02:49.361890       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0826 18:02:49.362582       1 reflector.go:219] Starting reflector *v1.ServiceAccount (12h21m23.141014346s) from k8s.io/client-go/informers/factory.go:134
I0826 18:02:49.362747       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
W0826 18:02:49.391237       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0826 18:02:49.391267       1 controllermanager.go:562] Starting "csrsigning"
I0826 18:02:49.407723       1 dynamic_serving_content.go:110] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0826 18:02:49.408229       1 dynamic_serving_content.go:110] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0826 18:02:49.408673       1 dynamic_serving_content.go:110] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0826 18:02:49.409075       1 dynamic_serving_content.go:110] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
I0826 18:02:49.409320       1 controllermanager.go:577] Started "csrsigning"
... skipping 193 lines ...
I0826 18:02:52.166222       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0826 18:02:52.166366       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0826 18:02:52.166435       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0826 18:02:52.166462       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0826 18:02:52.166508       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0826 18:02:52.166524       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0826 18:02:52.166586       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0826 18:02:52.166601       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0826 18:02:52.166704       1 controllermanager.go:577] Started "persistentvolume-binder"
I0826 18:02:52.166723       1 controllermanager.go:562] Starting "ttl-after-finished"
I0826 18:02:52.166806       1 pv_controller_base.go:308] Starting persistent volume controller
I0826 18:02:52.166818       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0826 18:02:52.316263       1 controllermanager.go:577] Started "ttl-after-finished"
... skipping 38 lines ...
I0826 18:02:53.567974       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0826 18:02:53.567998       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0826 18:02:53.568013       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0826 18:02:53.568025       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0826 18:02:53.568041       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0826 18:02:53.568053       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0826 18:02:53.568073       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0826 18:02:53.568084       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0826 18:02:53.568349       1 controllermanager.go:577] Started "attachdetach"
I0826 18:02:53.568365       1 controllermanager.go:562] Starting "serviceaccount"
I0826 18:02:53.568425       1 attach_detach_controller.go:328] Starting attach detach controller
I0826 18:02:53.568436       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0826 18:02:53.716185       1 controllermanager.go:577] Started "serviceaccount"
... skipping 301 lines ...
I0826 18:03:35.452369       1 controller.go:269] Triggering nodeSync
I0826 18:03:35.452455       1 controller.go:288] nodeSync has been triggered
I0826 18:03:35.452504       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0826 18:03:35.452606       1 controller.go:779] Finished updateLoadBalancerHosts
I0826 18:03:35.452685       1 controller.go:720] It took 0.000182398 seconds to finish nodeSyncInternal
I0826 18:03:35.452964       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
W0826 18:03:35.453195       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-z3rmsd-control-plane-799m2" does not exist
I0826 18:03:35.541825       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
I0826 18:03:35.553612       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
I0826 18:03:35.554179       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-z3rmsd-control-plane-799m2" new_ttl="0s"
I0826 18:03:35.963800       1 certificate_controller.go:82] Adding certificate request csr-4n2lq
I0826 18:03:35.963825       1 certificate_controller.go:82] Adding certificate request csr-4n2lq
I0826 18:03:35.963863       1 certificate_controller.go:173] Finished syncing certificate request "csr-4n2lq" (8.1µs)
I0826 18:03:35.963871       1 certificate_controller.go:173] Finished syncing certificate request "csr-4n2lq" (5.9µs)
I0826 18:03:35.963883       1 certificate_controller.go:82] Adding certificate request csr-4n2lq
I0826 18:03:35.963908       1 certificate_controller.go:82] Adding certificate request csr-4n2lq
I0826 18:03:35.963985       1 certificate_controller.go:173] Finished syncing certificate request "csr-4n2lq" (3.4µs)
I0826 18:03:35.963817       1 certificate_controller.go:82] Adding certificate request csr-4n2lq
I0826 18:03:35.964523       1 certificate_controller.go:173] Finished syncing certificate request "csr-4n2lq" (3.4µs)
I0826 18:03:35.980227       1 certificate_controller.go:173] Finished syncing certificate request "csr-4n2lq" (16.286414ms)
I0826 18:03:35.980265       1 certificate_controller.go:151] Sync csr-4n2lq failed with : recognized csr "csr-4n2lq" as [selfnodeclient nodeclient] but subject access review was not approved
I0826 18:03:36.059151       1 controller.go:269] Triggering nodeSync
I0826 18:03:36.059418       1 controller.go:288] nodeSync has been triggered
I0826 18:03:36.059725       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0826 18:03:36.060018       1 controller.go:779] Finished updateLoadBalancerHosts
I0826 18:03:36.060374       1 controller.go:720] It took 0.000613893 seconds to finish nodeSyncInternal
I0826 18:03:36.060310       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
... skipping 33 lines ...
I0826 18:03:36.695659       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0826 18:03:36.758432       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2021-08-26 18:03:36.693723028 +0000 UTC m=+56.552477841 - now: 2021-08-26 18:03:36.758414596 +0000 UTC m=+56.617169509]
I0826 18:03:36.761021       1 azure_backoff.go:109] VirtualMachinesClient.List(capz-z3rmsd) success
I0826 18:03:36.767531       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0826 18:03:36.797198       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (121.686724ms)
I0826 18:03:36.810528       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="174.907022ms"
I0826 18:03:36.810562       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0826 18:03:36.810608       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-08-26 18:03:36.810585306 +0000 UTC m=+56.669340119"
I0826 18:03:36.811584       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2021-08-26 18:03:36 +0000 UTC - now: 2021-08-26 18:03:36.811560095 +0000 UTC m=+56.670315008]
I0826 18:03:36.822008       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (146.73624ms)
I0826 18:03:36.822321       1 endpointslicemirroring_controller.go:274] syncEndpoints("kube-system/kube-dns")
I0826 18:03:36.822478       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (159.199µs)
I0826 18:03:36.829623       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="19.000685ms"
... skipping 215 lines ...
I0826 18:03:51.053870       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:03:51.054544       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:03:51.054723       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:03:51.054878       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (10.852702ms)
I0826 18:03:51.058683       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-26 18:03:50.902558216 +0000 UTC m=+70.761313029 - now: 2021-08-26 18:03:51.058676902 +0000 UTC m=+70.917431715]
I0826 18:03:51.118479       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="376.848484ms"
I0826 18:03:51.118512       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0826 18:03:51.118543       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-26 18:03:51.118526762 +0000 UTC m=+70.977281575"
I0826 18:03:51.118892       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-26 18:03:50 +0000 UTC - now: 2021-08-26 18:03:51.118887859 +0000 UTC m=+70.977642672]
I0826 18:03:51.150678       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/calico-kube-controllers-846b5f484d-tfhs9"
I0826 18:03:51.151703       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-846b5f484d-tfhs9" podUID=e31e2976-f886-4d9a-b9d0-f61e6792c128
I0826 18:03:51.152185       1 replica_set.go:380] Pod calico-kube-controllers-846b5f484d-tfhs9 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"calico-kube-controllers-846b5f484d-tfhs9", GenerateName:"calico-kube-controllers-846b5f484d-", Namespace:"kube-system", SelfLink:"", UID:"e31e2976-f886-4d9a-b9d0-f61e6792c128", ResourceVersion:"563", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63765597831, loc:(*time.Location)(0x7505dc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"calico-kube-controllers", "pod-template-hash":"846b5f484d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"calico-kube-controllers-846b5f484d", UID:"d9dd5a6e-103d-403c-a7bd-8998b42cf0fd", Controller:(*bool)(0xc000caa97e), BlockOwnerDeletion:(*bool)(0xc000caa97f)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc001c0c288), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001c0c2a0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-k8p55", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc000f00be0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"calico-kube-controllers", Image:"calico/kube-controllers:v3.20.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ENABLED_CONTROLLERS", Value:"node", ValueFrom:(*v1.EnvVarSource)(nil)}, v1.EnvVar{Name:"DATASTORE_TYPE", Value:"kubernetes", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-k8p55", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0019ee9c0), 
ReadinessProbe:(*v1.Probe)(0xc0019eea00), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc000caaa30), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"calico-kube-controllers", DeprecatedServiceAccount:"calico-kube-controllers", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00022f730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000caaa80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc000caaaa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc000caaaa8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc000caaaac), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001bdbbf0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0826 18:03:51.152593       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc04213c1b5df7039, ext:70762590518, loc:(*time.Location)(0x7505dc0)}}
... skipping 12 lines ...
I0826 18:03:51.315494       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-846b5f484d-tfhs9" podUID=e31e2976-f886-4d9a-b9d0-f61e6792c128
I0826 18:03:51.315650       1 replica_set.go:443] Pod calico-kube-controllers-846b5f484d-tfhs9 updated, objectMeta {Name:calico-kube-controllers-846b5f484d-tfhs9 GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:e31e2976-f886-4d9a-b9d0-f61e6792c128 ResourceVersion:563 Generation:0 CreationTimestamp:2021-08-26 18:03:51 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:d9dd5a6e-103d-403c-a7bd-8998b42cf0fd Controller:0xc000caa97e BlockOwnerDeletion:0xc000caa97f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-26 18:03:50 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9dd5a6e-103d-403c-a7bd-8998b42cf0fd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-846b5f484d-tfhs9 GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:e31e2976-f886-4d9a-b9d0-f61e6792c128 ResourceVersion:569 Generation:0 CreationTimestamp:2021-08-26 18:03:51 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:d9dd5a6e-103d-403c-a7bd-8998b42cf0fd Controller:0xc000cabbc7 BlockOwnerDeletion:0xc000cabbc8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-26 18:03:50 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d9dd5a6e-103d-403c-a7bd-8998b42cf0fd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-08-26 18:03:51 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0826 18:03:51.315933       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-846b5f484d-tfhs9"
I0826 18:03:51.334793       1 disruption.go:433] updatePod "calico-kube-controllers-846b5f484d-tfhs9" -> PDB "calico-kube-controllers"
I0826 18:03:51.316197       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-846b5f484d" (412.526174ms)
I0826 18:03:51.316309       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="73.592436ms"
I0826 18:03:51.336588       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0826 18:03:51.336792       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-26 18:03:51.336748595 +0000 UTC m=+71.195503408"
I0826 18:03:51.337486       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc04213c1b5df7039, ext:70762590518, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:51.338127       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-846b5f484d, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0826 18:03:51.342348       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-26 18:03:50 +0000 UTC - now: 2021-08-26 18:03:51.342342244 +0000 UTC m=+71.201097157]
I0826 18:03:51.342596       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 598s
I0826 18:03:51.342779       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="6.017745ms"
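Note: the "Error syncing deployment ... the object has been modified" entry at 18:03:51.336588 is an ordinary optimistic-concurrency conflict: another writer bumped the Deployment's resourceVersion between the controller's read and its write, so the update is rejected and the sync is simply requeued, which is why a fresh "Started syncing deployment" follows immediately. Client code that updates objects alongside controllers usually absorbs the same conflict with retry.RetryOnConflict; the sketch below is illustrative only (the kubeconfig location, namespace/name, and the annotation it sets are assumptions), not controller-manager code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a client from the default kubeconfig location (an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Re-read the Deployment and re-apply the mutation on every conflict, so a
	// concurrent status update by the controller does not make the write fail.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, getErr := cs.AppsV1().Deployments("kube-system").Get(context.TODO(), "calico-kube-controllers", metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		if dep.Annotations == nil {
			dep.Annotations = map[string]string{}
		}
		dep.Annotations["example.com/touched"] = "true" // hypothetical annotation, for illustration only
		_, updateErr := cs.AppsV1().Deployments("kube-system").Update(context.TODO(), dep, metav1.UpdateOptions{})
		return updateErr
	})
	fmt.Println("update finished:", err)
}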
... skipping 9 lines ...
I0826 18:03:51.767260       1 disruption.go:391] update DB "calico-kube-controllers"
I0826 18:03:51.772588       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (496.280726ms)
I0826 18:03:51.779438       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (60.599µs)
I0826 18:03:51.805234       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="245.684585ms"
I0826 18:03:51.805424       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0826 18:03:51.807280       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-26 18:03:51.806559959 +0000 UTC m=+71.665314772"
E0826 18:03:51.811565       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:51.814290       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:51.814435       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
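Note: the driver-call.go / plugins.go triplet that repeats throughout this log is kube-controller-manager's FlexVolume prober: it walks the volume-plugin exec directory, runs each driver binary with the init argument, and expects a JSON status on stdout. Here the nodeagent~uds driver directory exists on the host but its uds executable does not, so the exec fails with "no such file or directory", the captured output is empty, and unmarshalling an empty string yields "unexpected end of JSON input". The noise is harmless for this job. A minimal sketch of that failure mode, assuming an illustrative driverStatus shape rather than the real FlexVolume API:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative shape for the JSON a FlexVolume driver is
// expected to print in response to "init"; field names are assumptions.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probeDriver(executable string) (*driverStatus, error) {
	// Exec the driver binary with the "init" argument, as the prober does.
	out, execErr := exec.Command(executable, "init").CombinedOutput()
	if execErr != nil {
		// For a missing binary this is "fork/exec ...: no such file or directory"
		// and out stays empty, matching the warning lines above.
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}

	// Unmarshalling the empty output is what produces
	// "unexpected end of JSON input".
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init: %w", err)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}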
I0826 18:03:51.815371       1 deployment_util.go:808] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2021-08-26 18:03:51 +0000 UTC - now: 2021-08-26 18:03:51.815364779 +0000 UTC m=+71.674119592]
I0826 18:03:51.815522       1 progress.go:195] Queueing up deployment "calico-kube-controllers" for a progress check after 599s
I0826 18:03:51.815628       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="9.056718ms"
I0826 18:03:51.805601       1 disruption.go:427] updatePod called on pod "etcd-capz-z3rmsd-control-plane-799m2"
I0826 18:03:51.815850       1 disruption.go:490] No PodDisruptionBudgets found for pod etcd-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:51.815975       1 disruption.go:430] No matching pdb for pod "etcd-capz-z3rmsd-control-plane-799m2"
E0826 18:03:51.816196       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:51.816275       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:51.816371       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:51.933395       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:03:51.933902       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
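Note: the azure_vmss.go messages are informational. The Azure cloud provider parses each node's providerID to decide whether the VM is a scale-set (VMSS) instance; the control-plane machine here is a standalone virtual machine, so the parse fails and the provider falls back to availability-set handling. A rough illustration of that classification, using an assumed regular expression rather than the provider's actual parser:

package main

import (
	"fmt"
	"regexp"
)

// Illustrative pattern only, not the cloud provider's actual parser: a VMSS
// instance providerID contains ".../virtualMachineScaleSets/<set>/virtualMachines/<index>".
var vmssRE = regexp.MustCompile(`/virtualMachineScaleSets/([^/]+)/virtualMachines/`)

// scaleSetName returns the scale set a node belongs to, or ok=false for a
// standalone VM, which is then treated as availability-set managed.
func scaleSetName(providerID string) (name string, ok bool) {
	m := vmssRE.FindStringSubmatch(providerID)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	id := "azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2"
	if _, ok := scaleSetName(id); !ok {
		fmt.Println("not a vmss instance; assuming availability set")
	}
}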
I0826 18:03:52.519557       1 disruption.go:427] updatePod called on pod "kube-scheduler-capz-z3rmsd-control-plane-799m2"
I0826 18:03:52.521237       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-scheduler-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:52.521800       1 disruption.go:430] No matching pdb for pod "kube-scheduler-capz-z3rmsd-control-plane-799m2"
E0826 18:03:52.530090       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.530105       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.530128       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:52.774790       1 disruption.go:427] updatePod called on pod "calico-node-k6lkb"
I0826 18:03:52.774980       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-k6lkb, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:52.775356       1 disruption.go:430] No matching pdb for pod "calico-node-k6lkb"
I0826 18:03:52.775116       1 daemon_controller.go:570] Pod calico-node-k6lkb updated.
E0826 18:03:52.775890       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.775973       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.776075       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:52.777247       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c1c2ca4335, ext:70905564722, loc:(*time.Location)(0x7505dc0)}}
E0826 18:03:52.789456       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.789613       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.789717       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:52.790383       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c22f1c36a0, ext:72649133057, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:52.791500       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:03:52.791862       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:03:52.793682       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c22f1c36a0, ext:72649133057, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:52.799454       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c22fa6a36b, ext:72658204776, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:52.799780       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:03:52.800113       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:03:52.800337       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:03:52.800870       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (25.318373ms)
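Note: the Setting/Fulfilled expectations pairs that bracket each ReplicaSet and DaemonSet sync are the controllers' bookkeeping for in-flight creates and deletes: before acting, a controller records how many pods it expects to create or delete, decrements the counters as watch events arrive, and only syncs again once both counters reach zero (or the record expires). A stripped-down stand-in for that mechanism, not the real controller.ControlleeExpectations type:

package main

import (
	"fmt"
	"sync/atomic"
)

// expectations is a stripped-down stand-in for the controllers'
// ControlleeExpectations: counters of outstanding creates (add) and deletes (del).
type expectations struct {
	add int64
	del int64
}

func (e *expectations) set(add, del int64) {
	atomic.StoreInt64(&e.add, add)
	atomic.StoreInt64(&e.del, del)
}

// creationObserved / deletionObserved are called from watch event handlers.
func (e *expectations) creationObserved() { atomic.AddInt64(&e.add, -1) }
func (e *expectations) deletionObserved() { atomic.AddInt64(&e.del, -1) }

// fulfilled reports whether the controller may run another sync pass.
func (e *expectations) fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}

func main() {
	var exp expectations
	exp.set(1, 0)                // about to create one pod
	fmt.Println(exp.fulfilled()) // false: waiting for the create to be observed
	exp.creationObserved()       // the informer delivers the Pod "Added" event
	fmt.Println(exp.fulfilled()) // true: safe to sync again
}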
E0826 18:03:52.807567       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.810569       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.815292       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.815706       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.815810       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.815918       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.821593       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.821796       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.821972       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.822489       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.845052       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.861120       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.863894       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.913528       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.917554       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.919844       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.919946       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.920152       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.921122       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.921242       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.921378       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.921879       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.925416       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.925619       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.926491       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.960170       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.962739       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:52.967615       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:52.971728       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:52.972309       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:53.257998       1 disruption.go:427] updatePod called on pod "kube-scheduler-capz-z3rmsd-control-plane-799m2"
I0826 18:03:53.258304       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-scheduler-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:53.258438       1 disruption.go:430] No matching pdb for pod "kube-scheduler-capz-z3rmsd-control-plane-799m2"
E0826 18:03:53.259009       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:53.259106       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:53.259276       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:53.771523       1 gc_controller.go:161] GC'ing orphaned
I0826 18:03:53.771682       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:03:53.774094       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:03:53.781704       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:03:53.782738       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:03:53.842768       1 node_lifecycle_controller.go:869] Node capz-z3rmsd-control-plane-799m2 is NotReady as of 2021-08-26 18:03:53.842651597 +0000 UTC m=+73.701406510. Adding it to the Taint queue.
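Note: the node lifecycle controller keeps reporting the control-plane node NotReady (the CNI is still starting) and queues it for the node.kubernetes.io/not-ready NoExecute taint; the system pods dumped earlier survive because they carry the matching toleration with a tolerationSeconds grace period. Illustrative values for that taint/toleration pair:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Taint the node lifecycle controller places on a NotReady node.
	taint := corev1.Taint{
		Key:    "node.kubernetes.io/not-ready",
		Effect: corev1.TaintEffectNoExecute,
	}

	// Matching toleration carried by the system pods in the dumps above; the
	// pod is evicted only after tolerationSeconds elapse while the taint persists.
	seconds := int64(300)
	tol := corev1.Toleration{
		Key:               "node.kubernetes.io/not-ready",
		Operator:          corev1.TolerationOperatorExists,
		Effect:            corev1.TaintEffectNoExecute,
		TolerationSeconds: &seconds,
	}

	fmt.Println(tol.ToleratesTaint(&taint)) // true
}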
... skipping 24 lines ...
I0826 18:03:54.528524       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0826 18:03:54.528648       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c29f7d30a0, ext:74387053057, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.528795       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c29f84b9eb, ext:74387546856, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.528891       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0826 18:03:54.528998       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0826 18:03:54.529126       1 daemon_controller.go:1102] Updating daemon set status
E0826 18:03:54.529895       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:54.530001       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:54.530137       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:54.574568       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="46.9µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:40890" resp=200
I0826 18:03:54.610227       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:54.630597       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (102.858458ms)
I0826 18:03:54.630760       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0826 18:03:54.631264       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c29f84b9eb, ext:74387546856, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.631441       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c2a5a2f72e, ext:74490192015, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.631531       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0826 18:03:54.631637       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0826 18:03:54.631717       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c2a5a2f72e, ext:74490192015, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.631830       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213c2a5a8ea9d, ext:74490581914, loc:(*time.Location)(0x7505dc0)}}
I0826 18:03:54.631918       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0826 18:03:54.632224       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
E0826 18:03:54.636724       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:54.640156       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:54.642444       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:54.642809       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:03:54.642958       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (12.098383ms)
E0826 18:03:54.645482       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:54.645495       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:54.645513       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:54.707453       1 garbagecollector.go:213] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0826 18:03:54.707733       1 garbagecollector.go:219] reset restmapper
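Note: the garbage collector periodically re-reads the API surface from discovery and adds monitors for any resource types that appeared since the last pass; here the Calico CRDs have just registered, so it resets its RESTMapper and starts new informers, which produces the "garbage controller monitor not yet synced" lines further down. Listing the preferred server resources, the way such a resync begins, might look like this sketch (kubeconfig location assumed):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Re-list the API surface; newly registered CRD groups such as
	// crd.projectcalico.org/v1 show up here on the next pass.
	lists, err := dc.ServerPreferredResources()
	if err != nil {
		fmt.Println("partial discovery:", err)
	}
	for _, l := range lists {
		fmt.Println(l.GroupVersion, len(l.APIResources), "resources")
	}
}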
I0826 18:03:54.713950       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
E0826 18:03:54.736974       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:54.736997       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:54.737155       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:54.810576       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:54.982355       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.010637       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.067031       1 disruption.go:427] updatePod called on pod "kube-controller-manager-capz-z3rmsd-control-plane-799m2"
I0826 18:03:55.068074       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-controller-manager-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:55.070036       1 disruption.go:430] No matching pdb for pod "kube-controller-manager-capz-z3rmsd-control-plane-799m2"
E0826 18:03:55.075441       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.075584       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.075773       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.076108       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.078125       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.078244       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.078568       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.078659       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.078777       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.079137       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.079290       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.079399       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.079707       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.079792       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.079897       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.080160       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.080246       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.080345       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.080658       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.080743       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.080843       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:55.153611       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.167132       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.167272       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:55.167481       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.212827       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.310865       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.374926       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=bgpconfigurations", kind "crd.projectcalico.org/v1, Kind=BGPConfiguration"
I0826 18:03:55.381222       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=bgppeers", kind "crd.projectcalico.org/v1, Kind=BGPPeer"
I0826 18:03:55.383859       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=globalnetworksets", kind "crd.projectcalico.org/v1, Kind=GlobalNetworkSet"
... skipping 45 lines ...
I0826 18:03:55.511631       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.591068       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=hostendpoints
I0826 18:03:55.611691       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0826 18:03:55.675933       1 disruption.go:427] updatePod called on pod "kube-apiserver-capz-z3rmsd-control-plane-799m2"
I0826 18:03:55.676143       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-apiserver-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:03:55.676279       1 disruption.go:430] No matching pdb for pod "kube-apiserver-capz-z3rmsd-control-plane-799m2"
E0826 18:03:55.676886       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.677150       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.677452       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:55.689599       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamhandles
I0826 18:03:55.716160       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.790315       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgppeers
I0826 18:03:55.810321       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
E0826 18:03:55.851620       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:55.851637       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:55.851669       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:55.903670       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=globalnetworksets
I0826 18:03:55.910391       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:55.990513       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0826 18:03:56.009636       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0826 18:03:56.091293       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgppeers
I0826 18:03:56.110454       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
E0826 18:03:56.173235       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:56.174045       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:56.174707       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:56.190854       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamblocks
I0826 18:03:56.210540       1 resource_quota_monitor.go:294] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0826 18:03:56.290244       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamhandles
E0826 18:03:56.301073       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:56.301097       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:56.301126       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:56.304856       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:56.307012       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:56.307215       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:56.309784       1 shared_informer.go:270] caches populated
I0826 18:03:56.309818       1 shared_informer.go:247] Caches are synced for resource quota 
I0826 18:03:56.309825       1 resource_quota_controller.go:454] synced quota controller
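Note: the long run of "quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets" lines is the resource quota controller waiting for the informer caches of the freshly discovered Calico resources to fill; once they do, it logs "Caches are synced for resource quota" as above. Controllers typically gate their workers on cache.WaitForCacheSync; a small sketch against the core Pods informer (resync interval and kubeconfig location are arbitrary assumptions):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	stop := make(chan struct{})
	defer close(stop)

	// Shared informer factory with a periodic resync, the same mechanism the
	// "forcing resync" lines in this log refer to (interval here is arbitrary).
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podsSynced := factory.Core().V1().Pods().Informer().HasSynced
	factory.Start(stop)

	// Workers block here until the watch cache is populated; until then a
	// controller reports itself as "not synced", as in the lines above.
	if !cache.WaitForCacheSync(stop, podsSynced) {
		fmt.Println("caches never synced")
		return
	}
	fmt.Println("caches are synced")
}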
I0826 18:03:56.391443       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=globalnetworksets
E0826 18:03:56.475323       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:56.475398       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:56.475430       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:56.489692       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=globalnetworkpolicies
I0826 18:03:56.590558       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=globalnetworksets
I0826 18:03:56.692585       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:56.790470       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations
I0826 18:03:56.889636       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:56.935649       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
... skipping 11 lines ...
I0826 18:03:57.719281       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:57.790342       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:57.890568       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:57.992491       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamblocks
I0826 18:03:58.094622       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
I0826 18:03:58.190420       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
E0826 18:03:58.218901       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.220882       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.221574       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:58.298071       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
I0826 18:03:58.390522       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
I0826 18:03:58.490240       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
E0826 18:03:58.536031       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.537710       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.539241       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:58.540620       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.546411       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.549572       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:58.564291       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.564416       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.564558       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:58.578523       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.578653       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.578756       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:03:58.586585       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:03:58.591262       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:03:58.591379       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:03:58.591491       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=clusterinformations
I0826 18:03:58.724644       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
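Note: processVolumesInUse is the attach/detach controller reconciling the node.status.volumesInUse list reported by the kubelet against its own state; a disk is only safe to detach once the kubelet has dropped it from that list, which is exactly the condition the failing "should detach disk after pod deleted" test polls for until it times out. A hedged sketch of checking that list for a node (the node name is taken from this log, the volume name is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// volumeStillInUse reports whether the kubelet on the node still lists the
// volume in status.volumesInUse, i.e. whether detaching it would be premature.
func volumeStillInUse(node *corev1.Node, volume corev1.UniqueVolumeName) bool {
	for _, v := range node.Status.VolumesInUse {
		if v == volume {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "capz-z3rmsd-control-plane-799m2", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The volume name below is a placeholder; CSI volumes appear in the list
	// as "kubernetes.io/csi/<driver name>^<volume handle>".
	fmt.Println(volumeStillInUse(node, corev1.UniqueVolumeName("kubernetes.io/csi/disk.csi.azure.com^example")))
}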
I0826 18:03:58.750598       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:03:58.790578       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=hostendpoints
I0826 18:03:58.848566       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-control-plane-799m2 ReadyCondition updated. Updating timestamp.
I0826 18:03:58.854342       1 node_lifecycle_controller.go:869] Node capz-z3rmsd-control-plane-799m2 is NotReady as of 2021-08-26 18:03:58.854332337 +0000 UTC m=+78.713087250. Adding it to the Taint queue.
... skipping 14 lines ...
I0826 18:04:00.189796       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:04:00.318096       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:04:00.402346       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:04:00.442744       1 disruption.go:427] updatePod called on pod "kube-controller-manager-capz-z3rmsd-control-plane-799m2"
I0826 18:04:00.462923       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-controller-manager-capz-z3rmsd-control-plane-799m2, PodDisruptionBudget controller will avoid syncing.
I0826 18:04:00.464368       1 disruption.go:430] No matching pdb for pod "kube-controller-manager-capz-z3rmsd-control-plane-799m2"
E0826 18:04:00.467089       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.478887       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.479035       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.480244       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.489286       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
I0826 18:04:00.492109       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
E0826 18:04:00.496265       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.587831       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.587980       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.588112       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.588472       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.588553       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.588642       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.588929       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.589006       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.589091       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.589444       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.589946       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.590025       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:00.590499       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
E0826 18:04:00.601006       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.601145       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.601267       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:00.601592       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:00.601669       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:00.601750       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:00.690541       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
I0826 18:04:00.792375       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
I0826 18:04:00.889899       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
I0826 18:04:00.992447       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamconfigs
I0826 18:04:01.090222       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ipamblocks
I0826 18:04:01.189711       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=felixconfigurations
... skipping 98 lines ...
I0826 18:04:10.136267       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:04:10.136424       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:04:10.136557       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (6.369773ms)
I0826 18:04:10.136735       1 disruption.go:427] updatePod called on pod "calico-node-k6lkb"
I0826 18:04:10.136897       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-k6lkb, PodDisruptionBudget controller will avoid syncing.
I0826 18:04:10.137000       1 disruption.go:430] No matching pdb for pod "calico-node-k6lkb"
E0826 18:04:10.517974       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.518015       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.518044       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.558473       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.558561       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.558623       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.646538       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.646561       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.646591       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.947421       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.947442       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.947474       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.947958       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.947970       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.947988       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.948305       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.948394       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.948515       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.948848       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.948989       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.949104       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.949506       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.949769       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.950534       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.952080       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.952637       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.953033       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.957262       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.957460       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.957668       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.958033       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.958160       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.958295       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:10.958685       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:10.958781       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:10.958916       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:11.940730       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:11.940819       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:13.748899       1 disruption.go:427] updatePod called on pod "calico-node-k6lkb"
I0826 18:04:13.749322       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-k6lkb, PodDisruptionBudget controller will avoid syncing.
I0826 18:04:13.749431       1 disruption.go:430] No matching pdb for pod "calico-node-k6lkb"
I0826 18:04:13.749561       1 daemon_controller.go:570] Pod calico-node-k6lkb updated.
... skipping 4 lines ...
I0826 18:04:13.751829       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c76cc97f7d, ext:93610157790, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:13.751965       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c76cd207d4, ext:93610716881, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:13.752074       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:04:13.752235       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:04:13.752355       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:04:13.752507       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.840953ms)
E0826 18:04:13.761845       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.762028       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.762221       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.762682       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.762790       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.762964       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.763453       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.763553       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.763687       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.764177       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.764327       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.764450       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.764813       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.764934       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.765076       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.765484       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.765597       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.765776       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.769787       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.772654       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
I0826 18:04:13.772431       1 gc_controller.go:161] GC'ing orphaned
I0826 18:04:13.774213       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
E0826 18:04:13.774149       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.774690       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.774814       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.774943       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.775313       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.775410       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.775558       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.775961       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.776065       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.776298       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.776708       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.776809       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.776924       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:13.777310       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:13.777406       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:13.777521       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:13.939254       1 node_lifecycle_controller.go:869] Node capz-z3rmsd-control-plane-799m2 is NotReady as of 2021-08-26 18:04:13.9391788 +0000 UTC m=+93.797933713. Adding it to the Taint queue.
I0826 18:04:16.380199       1 disruption.go:427] updatePod called on pod "calico-node-k6lkb"
I0826 18:04:16.380472       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-k6lkb, PodDisruptionBudget controller will avoid syncing.
I0826 18:04:16.380811       1 disruption.go:430] No matching pdb for pod "calico-node-k6lkb"
I0826 18:04:16.381135       1 daemon_controller.go:570] Pod calico-node-k6lkb updated.
I0826 18:04:16.382999       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c76cd207d4, ext:93610716881, loc:(*time.Location)(0x7505dc0)}}
... skipping 3 lines ...
I0826 18:04:16.386371       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c816d84641, ext:96242027526, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:16.386592       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c8170adb02, ext:96245342307, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:16.386689       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:04:16.386830       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:04:16.387019       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:04:16.387151       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (5.851565ms)
E0826 18:04:16.387650       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.387750       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.387910       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.388220       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.388262       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.388279       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.388473       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.388484       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.388499       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.388886       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.388992       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.389094       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.389377       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.389478       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.389597       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.389872       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.390002       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.390092       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.390410       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.390420       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.390435       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.390617       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.390624       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.390640       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.390960       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.391141       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.391284       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.391617       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.391712       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.391851       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.392308       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.392404       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.392545       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:16.393873       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:16.394089       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:16.394310       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:16.942477       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:16.942617       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:18.942406       1 node_lifecycle_controller.go:869] Node capz-z3rmsd-control-plane-799m2 is NotReady as of 2021-08-26 18:04:18.942389181 +0000 UTC m=+98.801144094. Adding it to the Taint queue.
I0826 18:04:18.991899       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-control-plane-799m2"
I0826 18:04:19.035302       1 disruption.go:427] updatePod called on pod "calico-node-k6lkb"
I0826 18:04:19.035612       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-k6lkb, PodDisruptionBudget controller will avoid syncing.
... skipping 6 lines ...
I0826 18:04:19.046160       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c8c265d18a, ext:98898982123, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:19.046606       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213c8c2c713d5, ext:98905355986, loc:(*time.Location)(0x7505dc0)}}
I0826 18:04:19.046742       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:04:19.046893       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:04:19.047023       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:04:19.047173       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (11.219935ms)
E0826 18:04:19.049162       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.049319       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.049459       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.054466       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.055212       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.055985       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.057569       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.057700       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.057879       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.058801       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.059256       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.059403       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.059817       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.059929       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.060046       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.060363       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.060450       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.060550       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.060841       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.060925       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.061029       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.061400       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.061503       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.061612       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.061956       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.062041       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.062136       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.062486       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.062587       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.062748       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.064628       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.065616       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.066910       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0826 18:04:19.068585       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0826 18:04:19.068727       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0826 18:04:19.068823       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0826 18:04:19.509078       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="69.599µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41072" resp=200
I0826 18:04:21.942949       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:21.943759       1 azure_vmss.go:343] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-control-plane-799m2), assuming it is managed by availability set: not a vmss instance
I0826 18:04:23.996310       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:04:23.996322       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:04:23.996350       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 74 lines ...
I0826 18:04:33.774752       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:04:33.819084       1 controller.go:269] Triggering nodeSync
I0826 18:04:33.825585       1 controller.go:288] nodeSync has been triggered
I0826 18:04:33.825843       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0826 18:04:33.828482       1 controller.go:779] Finished updateLoadBalancerHosts
I0826 18:04:33.829049       1 controller.go:720] It took 0.00310508 seconds to finish nodeSyncInternal
I0826 18:04:33.998218       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-z3rmsd-control-plane-799m2 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-26 18:04:18 +0000 UTC,LastTransitionTime:2021-08-26 18:03:35 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 18:04:29 +0000 UTC,LastTransitionTime:2021-08-26 18:04:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0826 18:04:33.999511       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-control-plane-799m2 ReadyCondition updated. Updating timestamp.
I0826 18:04:33.999648       1 node_lifecycle_controller.go:893] Node capz-z3rmsd-control-plane-799m2 is healthy again, removing all taints
I0826 18:04:33.999747       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0826 18:04:38.996856       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:04:38.997121       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:04:39.495575       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="59.298µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41208" resp=200
... skipping 213 lines ...
I0826 18:05:45.742414       1 controller.go:269] Triggering nodeSync
I0826 18:05:45.743386       1 controller.go:288] nodeSync has been triggered
I0826 18:05:45.743487       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0826 18:05:45.743579       1 controller.go:779] Finished updateLoadBalancerHosts
I0826 18:05:45.743664       1 controller.go:720] It took 0.000177998 seconds to finish nodeSyncInternal
I0826 18:05:45.743751       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
W0826 18:05:45.743845       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-z3rmsd-md-0-58bbv" does not exist
I0826 18:05:45.744823       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213d34f696667, ext:141117320648, loc:(*time.Location)(0x7505dc0)}}
I0826 18:05:45.745398       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213de6c6dceab, ext:185604148748, loc:(*time.Location)(0x7505dc0)}}
I0826 18:05:45.745501       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-z3rmsd-md-0-58bbv], creating 1
I0826 18:05:45.803916       1 controller_utils.go:581] Controller kube-proxy created pod kube-proxy-tb7mw
I0826 18:05:45.804110       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0826 18:05:45.804203       1 controller_utils.go:195] Controller still waiting on expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04213de6be15fc3, ext:185594945316, loc:(*time.Location)(0x7505dc0)}}
... skipping 210 lines ...
I0826 18:06:05.576924       1 controller.go:269] Triggering nodeSync
I0826 18:06:05.577017       1 controller.go:288] nodeSync has been triggered
I0826 18:06:05.577220       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0826 18:06:05.577315       1 controller.go:779] Finished updateLoadBalancerHosts
I0826 18:06:05.577483       1 controller.go:720] It took 0.000278798 seconds to finish nodeSyncInternal
I0826 18:06:05.576537       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
W0826 18:06:05.577712       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-z3rmsd-md-0-sq4fr" does not exist
I0826 18:06:05.577941       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213e28eb058c6, ext:202105192899, loc:(*time.Location)(0x7505dc0)}}
I0826 18:06:05.578249       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213e3627753c4, ext:205437000385, loc:(*time.Location)(0x7505dc0)}}
I0826 18:06:05.578685       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-z3rmsd-md-0-sq4fr], creating 1
I0826 18:06:05.618643       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/kube-proxy-m5ftz"
I0826 18:06:05.618694       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-m5ftz" podUID=fc278556-8f13-4506-9cae-fe740c389337
I0826 18:06:05.618742       1 daemon_controller.go:513] Pod kube-proxy-m5ftz added.
... skipping 248 lines ...
I0826 18:06:28.703347       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213e929eb0079, ext:228562021850, loc:(*time.Location)(0x7505dc0)}}
I0826 18:06:28.703429       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213e929ed67b3, ext:228562179248, loc:(*time.Location)(0x7505dc0)}}
I0826 18:06:28.703445       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:06:28.703488       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:06:28.703531       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:06:28.703590       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.078184ms)
I0826 18:06:29.252651       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-z3rmsd-md-0-58bbv transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-26 18:06:16 +0000 UTC,LastTransitionTime:2021-08-26 18:05:45 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 18:06:26 +0000 UTC,LastTransitionTime:2021-08-26 18:06:26 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0826 18:06:29.252741       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:06:29.264766       1 node_lifecycle_controller.go:893] Node capz-z3rmsd-md-0-58bbv is healthy again, removing all taints
I0826 18:06:29.264878       1 node_lifecycle_controller.go:1214] Controller detected that zone francecentral::0 is now in state Normal.
I0826 18:06:29.266770       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-z3rmsd-md-0-58bbv}
I0826 18:06:29.266807       1 taint_manager.go:440] "Updating known taints on node" node="capz-z3rmsd-md-0-58bbv" taints=[]
I0826 18:06:29.266829       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-z3rmsd-md-0-58bbv"
... skipping 109 lines ...
I0826 18:06:49.184947       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04213ee4b05fcca, ext:249043696683, loc:(*time.Location)(0x7505dc0)}}
I0826 18:06:49.184965       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0826 18:06:49.185018       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0826 18:06:49.185051       1 daemon_controller.go:1102] Updating daemon set status
I0826 18:06:49.185133       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.376664ms)
I0826 18:06:49.268880       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:06:49.269104       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-z3rmsd-md-0-sq4fr transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-08-26 18:06:35 +0000 UTC,LastTransitionTime:2021-08-26 18:06:05 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-08-26 18:06:46 +0000 UTC,LastTransitionTime:2021-08-26 18:06:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0826 18:06:49.269191       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-sq4fr ReadyCondition updated. Updating timestamp.
I0826 18:06:49.286097       1 node_lifecycle_controller.go:893] Node capz-z3rmsd-md-0-sq4fr is healthy again, removing all taints
I0826 18:06:49.287318       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-z3rmsd-md-0-sq4fr}
I0826 18:06:49.287347       1 taint_manager.go:440] "Updating known taints on node" node="capz-z3rmsd-md-0-sq4fr" taints=[]
I0826 18:06:49.287375       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-z3rmsd-md-0-sq4fr"
I0826 18:06:49.287887       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
... skipping 161 lines ...
I0826 18:08:05.156647       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-ksvvh, uid 8c48065b-2b84-41a8-b512-024a734d1295, event type delete
I0826 18:08:05.226606       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (3.8µs)
I0826 18:08:05.228461       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-8081, estimate: 0, errors: <nil>
I0826 18:08:05.244468       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-8081" (234.632805ms)
I0826 18:08:05.833277       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1318
I0826 18:08:05.856592       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1318, name default-token-2kt7z, uid 2277821f-e242-49e3-a253-0eaef6a5cdae, event type delete
E0826 18:08:05.896566       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1318/default: secrets "default-token-nn5cd" is forbidden: unable to create new content in namespace azuredisk-1318 because it is being terminated
I0826 18:08:05.940280       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1318/default), service account deleted, removing tokens
I0826 18:08:05.940370       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1318, name default, uid 83d9f67d-d3b7-4273-a0c8-0ec0ee05c1a4, event type delete
I0826 18:08:05.940434       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1318" (2µs)
I0826 18:08:05.958652       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1318, name kube-root-ca.crt, uid 4a7d7b82-62a7-4a69-a57c-6218a4b684c8, event type delete
I0826 18:08:05.962937       1 publisher.go:186] Finished syncing namespace "azuredisk-1318" (4.082271ms)
I0826 18:08:06.076475       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1318, estimate: 0, errors: <nil>
I0826 18:08:06.077073       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1318" (3.1µs)
I0826 18:08:06.088790       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1318" (260.025022ms)
I0826 18:08:06.656115       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-694
I0826 18:08:06.722970       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-694, name default-token-h79pn, uid 5aa9429a-d535-4368-bf7f-ee603c1c39d8, event type delete
E0826 18:08:06.745148       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-694/default: secrets "default-token-c87f9" is forbidden: unable to create new content in namespace azuredisk-694 because it is being terminated
I0826 18:08:06.804763       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-694, name default, uid e4cba4bd-4560-4de4-9413-3cf9fc73651a, event type delete
I0826 18:08:06.804824       1 tokens_controller.go:252] syncServiceAccount(azuredisk-694/default), service account deleted, removing tokens
I0826 18:08:06.806061       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-694" (123.4µs)
I0826 18:08:06.828172       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-694, name kube-root-ca.crt, uid 7738fca2-431a-498a-aa7f-2682a9e4310b, event type delete
I0826 18:08:06.833748       1 publisher.go:186] Finished syncing namespace "azuredisk-694" (5.388262ms)
I0826 18:08:06.850821       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-694" (1.9µs)
... skipping 31 lines ...
I0826 18:08:07.357580       1 pv_controller.go:1763] operation "provision-azuredisk-5356/pvc-rkfpg[b59aa6f4-880d-430d-8acd-4c1b6def79c1]" is already running, skipping
I0826 18:08:07.359558       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 StorageAccountType:Standard_LRS Size:10
I0826 18:08:07.467232       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3274
I0826 18:08:07.553734       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3274, name kube-root-ca.crt, uid 1614f83e-cc09-432e-ae04-d65bd0d59d51, event type delete
I0826 18:08:07.557114       1 publisher.go:186] Finished syncing namespace "azuredisk-3274" (3.253281ms)
I0826 18:08:07.581130       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3274, name default-token-7x7ft, uid 7c4bba4e-b1ba-4ac8-ad41-f2b83594b481, event type delete
E0826 18:08:07.598585       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3274/default: secrets "default-token-5ch7c" is forbidden: unable to create new content in namespace azuredisk-3274 because it is being terminated
I0826 18:08:07.606550       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3274/default), service account deleted, removing tokens
I0826 18:08:07.606626       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3274, name default, uid c9695ec4-7f2d-43f7-8b15-b5b0c54c9999, event type delete
I0826 18:08:07.606666       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3274" (2.4µs)
I0826 18:08:07.658075       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3274" (2.9µs)
I0826 18:08:07.659659       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3274, estimate: 0, errors: <nil>
I0826 18:08:07.670958       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3274" (206.473213ms)
I0826 18:08:08.281777       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-495
I0826 18:08:08.328480       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-495, name default-token-fqqh5, uid aded81b9-ba6b-4f32-8284-db05016fa9ec, event type delete
E0826 18:08:08.344197       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-495/default: secrets "default-token-n7gq7" is forbidden: unable to create new content in namespace azuredisk-495 because it is being terminated
I0826 18:08:08.413293       1 tokens_controller.go:252] syncServiceAccount(azuredisk-495/default), service account deleted, removing tokens
I0826 18:08:08.413380       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-495, name default, uid 67e2e3e1-0833-4954-8944-e56451c198c3, event type delete
I0826 18:08:08.413405       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-495" (2.6µs)
I0826 18:08:08.438310       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-495, name kube-root-ca.crt, uid c9b83089-5fd2-436f-abac-fbd8b98d8531, event type delete
I0826 18:08:08.440795       1 publisher.go:186] Finished syncing namespace "azuredisk-495" (2.442786ms)
I0826 18:08:08.458755       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-495" (2.6µs)
... skipping 229 lines ...
I0826 18:08:33.529368       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: claim azuredisk-5356/pvc-rkfpg not found
I0826 18:08:33.529377       1 pv_controller.go:1108] reclaimVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: policy is Delete
I0826 18:08:33.529388       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]]
I0826 18:08:33.529395       1 pv_controller.go:1763] operation "delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]" is already running, skipping
I0826 18:08:33.531350       1 pv_controller.go:1340] isVolumeReleased[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is released
I0826 18:08:33.531370       1 pv_controller.go:1404] doDeleteVolume [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]
I0826 18:08:33.556533       1 pv_controller.go:1259] deletion of volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:08:33.556562       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: set phase Failed
I0826 18:08:33.556613       1 pv_controller.go:858] updating PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: set phase Failed
I0826 18:08:33.561691       1 pv_protection_controller.go:205] Got event on PV pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1
I0826 18:08:33.561842       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" with version 1333
I0826 18:08:33.561936       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: phase: Failed, bound to: "azuredisk-5356/pvc-rkfpg (uid: b59aa6f4-880d-430d-8acd-4c1b6def79c1)", boundByController: true
I0826 18:08:33.562016       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is bound to claim azuredisk-5356/pvc-rkfpg
I0826 18:08:33.562071       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: claim azuredisk-5356/pvc-rkfpg not found
I0826 18:08:33.562092       1 pv_controller.go:1108] reclaimVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: policy is Delete
I0826 18:08:33.562120       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]]
I0826 18:08:33.562171       1 pv_controller.go:1763] operation "delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]" is already running, skipping
I0826 18:08:33.563165       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" with version 1333
I0826 18:08:33.563217       1 pv_controller.go:879] volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" entered phase "Failed"
I0826 18:08:33.563236       1 pv_controller.go:901] volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:08:33.563349       1 goroutinemap.go:150] Operation for "delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]" failed. No retries permitted until 2021-08-26 18:08:34.063308259 +0000 UTC m=+353.922063072 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:08:33.563415       1 event.go:291] "Event occurred" object="pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
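Note on the sequence above: the delete of pvc-b59aa6f4 fails while the Azure disk is still attached to the node, and the controller defers the retry ("No retries permitted until ...") with a doubling delay (500ms here, 1s a few lines further down). A minimal, self-contained Go sketch of that exponential-backoff retry pattern follows; deleteWithBackoff and errDiskAttached are hypothetical names used for illustration only, not the controller's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// errDiskAttached stands in for the "disk already attached to node, could not be
// deleted" error seen in the log; it is a hypothetical placeholder.
var errDiskAttached = errors.New("disk already attached to node, could not be deleted")

// deleteWithBackoff retries deleteFn, doubling the delay after every failure,
// which mirrors the 500ms -> 1s -> 2s spacing of the "No retries permitted
// until ..." messages above.
func deleteWithBackoff(deleteFn func() error, initial time.Duration, maxRetries int) error {
	delay := initial
	for attempt := 1; attempt <= maxRetries; attempt++ {
		err := deleteFn()
		if err == nil {
			return nil
		}
		fmt.Printf("delete failed (attempt %d): %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("delete did not succeed after %d attempts", maxRetries)
}

func main() {
	attempts := 0
	// Simulate a disk that stays attached for the first three delete attempts.
	err := deleteWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errDiskAttached
		}
		return nil
	}, 500*time.Millisecond, 6)
	fmt.Println("result:", err)
}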
I0826 18:08:33.859496       1 gc_controller.go:161] GC'ing orphaned
I0826 18:08:33.859531       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:08:35.222501       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 0 items received
I0826 18:08:37.347484       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:08:37.347788       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 to the node "capz-z3rmsd-md-0-58bbv" mounted false
... skipping 5 lines ...
I0826 18:08:37.491838       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1 from node "capz-z3rmsd-md-0-58bbv"
I0826 18:08:37.491957       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1"
I0826 18:08:37.492012       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1)
I0826 18:08:39.188024       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:08:39.196171       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:08:39.196249       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" with version 1333
I0826 18:08:39.196318       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: phase: Failed, bound to: "azuredisk-5356/pvc-rkfpg (uid: b59aa6f4-880d-430d-8acd-4c1b6def79c1)", boundByController: true
I0826 18:08:39.196358       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is bound to claim azuredisk-5356/pvc-rkfpg
I0826 18:08:39.196381       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: claim azuredisk-5356/pvc-rkfpg not found
I0826 18:08:39.196394       1 pv_controller.go:1108] reclaimVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: policy is Delete
I0826 18:08:39.196411       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]]
I0826 18:08:39.196461       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1] started
I0826 18:08:39.203491       1 pv_controller.go:1340] isVolumeReleased[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is released
I0826 18:08:39.203510       1 pv_controller.go:1404] doDeleteVolume [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]
I0826 18:08:39.203547       1 pv_controller.go:1259] deletion of volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) since it's in attaching or detaching state
I0826 18:08:39.203693       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: set phase Failed
I0826 18:08:39.203712       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: phase Failed already set
E0826 18:08:39.203746       1 goroutinemap.go:150] Operation for "delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]" failed. No retries permitted until 2021-08-26 18:08:40.203722851 +0000 UTC m=+360.062477964 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1) since it's in attaching or detaching state
I0826 18:08:39.302701       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:08:39.488735       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="109.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:43528" resp=200
I0826 18:08:41.173460       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 0 items received
I0826 18:08:44.769201       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicaSet total 16 items received
I0826 18:08:44.782218       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0826 18:08:47.371571       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
... skipping 8 lines ...
I0826 18:08:53.860299       1 gc_controller.go:161] GC'ing orphaned
I0826 18:08:53.860368       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:08:54.183332       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:08:54.188475       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:08:54.196672       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:08:54.196747       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" with version 1333
I0826 18:08:54.196775       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: phase: Failed, bound to: "azuredisk-5356/pvc-rkfpg (uid: b59aa6f4-880d-430d-8acd-4c1b6def79c1)", boundByController: true
I0826 18:08:54.196801       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is bound to claim azuredisk-5356/pvc-rkfpg
I0826 18:08:54.196813       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: claim azuredisk-5356/pvc-rkfpg not found
I0826 18:08:54.196819       1 pv_controller.go:1108] reclaimVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: policy is Delete
I0826 18:08:54.196830       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]]
I0826 18:08:54.196863       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1] started
I0826 18:08:54.200829       1 pv_controller.go:1340] isVolumeReleased[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is released
... skipping 2 lines ...
I0826 18:08:59.377503       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1
I0826 18:08:59.377546       1 pv_controller.go:1435] volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" deleted
I0826 18:08:59.377727       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: success
I0826 18:08:59.384988       1 pv_protection_controller.go:205] Got event on PV pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1
I0826 18:08:59.385036       1 pv_protection_controller.go:125] Processing PV pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1
I0826 18:08:59.385444       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" with version 1373
I0826 18:08:59.385477       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: phase: Failed, bound to: "azuredisk-5356/pvc-rkfpg (uid: b59aa6f4-880d-430d-8acd-4c1b6def79c1)", boundByController: true
I0826 18:08:59.385505       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: volume is bound to claim azuredisk-5356/pvc-rkfpg
I0826 18:08:59.385526       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: claim azuredisk-5356/pvc-rkfpg not found
I0826 18:08:59.385536       1 pv_controller.go:1108] reclaimVolume[pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1]: policy is Delete
I0826 18:08:59.385552       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1[7f19ae77-1626-4de9-a659-495ea930cb65]]
I0826 18:08:59.385573       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1] started
I0826 18:08:59.389904       1 pv_controller.go:1243] Volume "pvc-b59aa6f4-880d-430d-8acd-4c1b6def79c1" is already being deleted
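The 18:08:37 to 18:08:59 lines show the order that eventually unblocks the delete: the attach/detach controller detaches the disk from capz-z3rmsd-md-0-58bbv, and only after that does doDeleteVolume succeed in removing the managed disk. Below is a hedged Go sketch of that detach-then-delete flow; detachDisk, isDiskAttached, and deleteDisk are hypothetical stand-ins for the cloud-provider calls implied by the log, not real APIs.

package main

import (
	"context"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the operations implied by the log
// (detach issued, detach completed, managed disk deleted).
func detachDisk(ctx context.Context, diskURI, node string) error      { return nil }
func isDiskAttached(ctx context.Context, diskURI string) (bool, error) { return false, nil }
func deleteDisk(ctx context.Context, diskURI string) error            { return nil }

// deleteAfterDetach issues a detach, polls until the disk is no longer attached,
// and only then deletes the managed disk, mirroring the ordering seen above.
func deleteAfterDetach(ctx context.Context, diskURI, node string) error {
	if err := detachDisk(ctx, diskURI, node); err != nil {
		return fmt.Errorf("detach %s from %s: %w", diskURI, node, err)
	}
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		attached, err := isDiskAttached(ctx, diskURI)
		if err != nil {
			return err
		}
		if !attached {
			return deleteDisk(ctx, diskURI)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for detach: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	fmt.Println(deleteAfterDetach(ctx, "/subscriptions/.../disks/example-disk", "example-node"))
}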
... skipping 150 lines ...
I0826 18:09:14.429973       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1957/pvc-8wwzr] status: phase Bound already set
I0826 18:09:14.429984       1 pv_controller.go:1038] volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" bound to claim "azuredisk-1957/pvc-8wwzr"
I0826 18:09:14.430002       1 pv_controller.go:1039] volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" status after binding: phase: Bound, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:09:14.430016       1 pv_controller.go:1040] claim "azuredisk-1957/pvc-8wwzr" status after binding: phase: Bound, bound to: "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa", bindCompleted: true, boundByController: true
I0826 18:09:14.689111       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4147
I0826 18:09:14.747202       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4147, name default-token-kv6mh, uid 2ec2a66d-48a2-4307-baf6-4457fe8ac6ce, event type delete
E0826 18:09:14.761349       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4147/default: secrets "default-token-n42vg" is forbidden: unable to create new content in namespace azuredisk-4147 because it is being terminated
I0826 18:09:14.835452       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4147, name kube-root-ca.crt, uid f1395889-818c-4009-9c8a-d713040ab3d7, event type delete
I0826 18:09:14.837588       1 publisher.go:186] Finished syncing namespace "azuredisk-4147" (2.064877ms)
I0826 18:09:14.888434       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4147/default), service account deleted, removing tokens
I0826 18:09:14.888507       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4147, name default, uid 13467f3f-0fd1-4d7d-afe7-071228b98bb7, event type delete
I0826 18:09:14.888549       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4147" (1.9µs)
I0826 18:09:14.914515       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4147, estimate: 0, errors: <nil>
... skipping 363 lines ...
I0826 18:11:19.377053       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:19.377064       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:19.377100       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:19.377112       1 pv_controller.go:1763] operation "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" is already running, skipping
I0826 18:11:19.379531       1 pv_controller.go:1340] isVolumeReleased[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is released
I0826 18:11:19.379552       1 pv_controller.go:1404] doDeleteVolume [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]
I0826 18:11:19.403270       1 pv_controller.go:1259] deletion of volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:11:19.403303       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: set phase Failed
I0826 18:11:19.403314       1 pv_controller.go:858] updating PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: set phase Failed
I0826 18:11:19.408949       1 pv_protection_controller.go:205] Got event on PV pvc-361e3749-11a7-4fef-821c-07d1ccf656fa
I0826 18:11:19.408998       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1647
I0826 18:11:19.409035       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase: Failed, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:11:19.409071       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is bound to claim azuredisk-1957/pvc-8wwzr
I0826 18:11:19.409096       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:19.409108       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:19.409127       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:19.409136       1 pv_controller.go:1763] operation "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" is already running, skipping
I0826 18:11:19.409406       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1647
I0826 18:11:19.409434       1 pv_controller.go:879] volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" entered phase "Failed"
I0826 18:11:19.409446       1 pv_controller.go:901] volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:11:19.409495       1 goroutinemap.go:150] Operation for "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" failed. No retries permitted until 2021-08-26 18:11:19.909471071 +0000 UTC m=+519.768225884 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:11:19.409832       1 event.go:291] "Event occurred" object="pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
I0826 18:11:19.488043       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="142.798µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45060" resp=200
I0826 18:11:24.186737       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:11:24.195017       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:11:24.204593       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:11:24.204653       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1647
I0826 18:11:24.204701       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase: Failed, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:11:24.204740       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is bound to claim azuredisk-1957/pvc-8wwzr
I0826 18:11:24.204779       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:24.204807       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:24.204840       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:24.204879       1 pv_controller.go:1231] deleteVolumeOperation [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa] started
I0826 18:11:24.220969       1 pv_controller.go:1340] isVolumeReleased[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is released
I0826 18:11:24.221005       1 pv_controller.go:1404] doDeleteVolume [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]
I0826 18:11:24.251259       1 pv_controller.go:1259] deletion of volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:11:24.251289       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: set phase Failed
I0826 18:11:24.251301       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase Failed already set
E0826 18:11:24.251349       1 goroutinemap.go:150] Operation for "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" failed. No retries permitted until 2021-08-26 18:11:25.251310992 +0000 UTC m=+525.110065805 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:11:25.154183       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0826 18:11:25.878235       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 21 items received
I0826 18:11:27.482967       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:11:27.483101       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:11:27.543824       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-z3rmsd-md-0-58bbv" succeeded. VolumesAttached: []
I0826 18:11:27.544415       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa") on node "capz-z3rmsd-md-0-58bbv" 
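Once the node status update above patches volumesAttached to null, the node no longer reports the disk as attached. As a quick way to inspect that state from outside the controller, here is a small client-go sketch (assuming a local kubeconfig and using the node name from this log; error handling kept minimal) that prints a node's volumesAttached and volumesInUse:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "capz-z3rmsd-md-0-58bbv", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// volumesAttached is what the attach/detach controller patches to null above;
	// volumesInUse is what the kubelet reports and processVolumesInUse consumes.
	fmt.Println("volumesAttached:")
	for _, v := range node.Status.VolumesAttached {
		fmt.Printf("  %s\n", v.Name)
	}
	fmt.Println("volumesInUse:")
	for _, v := range node.Status.VolumesInUse {
		fmt.Printf("  %s\n", v)
	}
}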
... skipping 10 lines ...
I0826 18:11:33.864198       1 gc_controller.go:161] GC'ing orphaned
I0826 18:11:33.864224       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:11:38.321449       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0826 18:11:39.195402       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:11:39.205631       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:11:39.205746       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1647
I0826 18:11:39.205796       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase: Failed, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:11:39.205837       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is bound to claim azuredisk-1957/pvc-8wwzr
I0826 18:11:39.205862       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:39.205871       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:39.205886       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:39.205920       1 pv_controller.go:1231] deleteVolumeOperation [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa] started
I0826 18:11:39.212336       1 pv_controller.go:1340] isVolumeReleased[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is released
I0826 18:11:39.212355       1 pv_controller.go:1404] doDeleteVolume [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]
I0826 18:11:39.212394       1 pv_controller.go:1259] deletion of volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) since it's in attaching or detaching state
I0826 18:11:39.212409       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: set phase Failed
I0826 18:11:39.212420       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase Failed already set
E0826 18:11:39.212451       1 goroutinemap.go:150] Operation for "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" failed. No retries permitted until 2021-08-26 18:11:41.212429057 +0000 UTC m=+541.071183970 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) since it's in attaching or detaching state
I0826 18:11:39.488263       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="231.097µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45268" resp=200
I0826 18:11:42.784924       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0826 18:11:43.194420       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) returned with <nil>
I0826 18:11:43.194479       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa) succeeded
I0826 18:11:43.194493       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa was detached from node:capz-z3rmsd-md-0-58bbv
I0826 18:11:43.194522       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa") on node "capz-z3rmsd-md-0-58bbv" 
... skipping 2 lines ...
I0826 18:11:53.864729       1 gc_controller.go:161] GC'ing orphaned
I0826 18:11:53.864786       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:11:54.187443       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:11:54.195524       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:11:54.206823       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:11:54.207042       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1647
I0826 18:11:54.207110       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase: Failed, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:11:54.207169       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is bound to claim azuredisk-1957/pvc-8wwzr
I0826 18:11:54.207216       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:54.207229       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:54.207248       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:54.207397       1 pv_controller.go:1231] deleteVolumeOperation [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa] started
I0826 18:11:54.214238       1 pv_controller.go:1340] isVolumeReleased[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is released
... skipping 7 lines ...
I0826 18:11:59.418755       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa
I0826 18:11:59.418792       1 pv_controller.go:1435] volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" deleted
I0826 18:11:59.418807       1 pv_controller.go:1283] deleteVolumeOperation [pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: success
I0826 18:11:59.427664       1 pv_protection_controller.go:205] Got event on PV pvc-361e3749-11a7-4fef-821c-07d1ccf656fa
I0826 18:11:59.427697       1 pv_protection_controller.go:125] Processing PV pvc-361e3749-11a7-4fef-821c-07d1ccf656fa
I0826 18:11:59.427873       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-361e3749-11a7-4fef-821c-07d1ccf656fa" with version 1707
I0826 18:11:59.427936       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: phase: Failed, bound to: "azuredisk-1957/pvc-8wwzr (uid: 361e3749-11a7-4fef-821c-07d1ccf656fa)", boundByController: true
I0826 18:11:59.427988       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: volume is bound to claim azuredisk-1957/pvc-8wwzr
I0826 18:11:59.428010       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: claim azuredisk-1957/pvc-8wwzr not found
I0826 18:11:59.428022       1 pv_controller.go:1108] reclaimVolume[pvc-361e3749-11a7-4fef-821c-07d1ccf656fa]: policy is Delete
I0826 18:11:59.428063       1 pv_controller.go:1752] scheduleOperation[delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]]
I0826 18:11:59.428073       1 pv_controller.go:1763] operation "delete-pvc-361e3749-11a7-4fef-821c-07d1ccf656fa[383b95fc-d276-4625-80c4-7a42dc919145]" is already running, skipping
I0826 18:11:59.451179       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-361e3749-11a7-4fef-821c-07d1ccf656fa
... skipping 149 lines ...
I0826 18:12:09.210585       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8705/pvc-dqgfv]: already bound to "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e"
I0826 18:12:09.210596       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8705/pvc-dqgfv] status: set phase Bound
I0826 18:12:09.210660       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8705/pvc-dqgfv] status: phase Bound already set
I0826 18:12:09.210678       1 pv_controller.go:1038] volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" bound to claim "azuredisk-8705/pvc-dqgfv"
I0826 18:12:09.210698       1 pv_controller.go:1039] volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" status after binding: phase: Bound, bound to: "azuredisk-8705/pvc-dqgfv (uid: b68b7a98-fc1b-49d6-af98-59a68e8d818e)", boundByController: true
I0826 18:12:09.210738       1 pv_controller.go:1040] claim "azuredisk-8705/pvc-dqgfv" status after binding: phase: Bound, bound to: "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e", bindCompleted: true, boundByController: true
E0826 18:12:09.212252       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1957/default: secrets "default-token-mgwqf" is forbidden: unable to create new content in namespace azuredisk-1957 because it is being terminated
I0826 18:12:09.216143       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1957/default), service account deleted, removing tokens
I0826 18:12:09.216881       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1957, name default, uid 5ab1e499-e8eb-4287-b54f-5510af7a5b29, event type delete
I0826 18:12:09.217017       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1957" (2.4µs)
I0826 18:12:09.263072       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1957, name kube-root-ca.crt, uid 241dc46b-69e4-4005-846e-b73f2768ac2b, event type delete
I0826 18:12:09.266455       1 publisher.go:186] Finished syncing namespace "azuredisk-1957" (3.328563ms)
I0826 18:12:09.333551       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1957" (3µs)
... skipping 139 lines ...
I0826 18:12:30.485376       1 pv_controller.go:1108] reclaimVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: policy is Delete
I0826 18:12:30.485387       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]]
I0826 18:12:30.485394       1 pv_controller.go:1763] operation "delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]" is already running, skipping
I0826 18:12:30.485420       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e] started
I0826 18:12:30.490489       1 pv_controller.go:1340] isVolumeReleased[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is released
I0826 18:12:30.490529       1 pv_controller.go:1404] doDeleteVolume [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]
I0826 18:12:30.514108       1 pv_controller.go:1259] deletion of volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:12:30.514135       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: set phase Failed
I0826 18:12:30.514147       1 pv_controller.go:858] updating PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: set phase Failed
I0826 18:12:30.519070       1 pv_protection_controller.go:205] Got event on PV pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e
I0826 18:12:30.519139       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" with version 1816
I0826 18:12:30.519499       1 pv_controller.go:879] volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" entered phase "Failed"
I0826 18:12:30.519521       1 pv_controller.go:901] volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:12:30.519696       1 goroutinemap.go:150] Operation for "delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]" failed. No retries permitted until 2021-08-26 18:12:31.019646548 +0000 UTC m=+590.878401461 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:12:30.519221       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" with version 1816
I0826 18:12:30.520275       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: phase: Failed, bound to: "azuredisk-8705/pvc-dqgfv (uid: b68b7a98-fc1b-49d6-af98-59a68e8d818e)", boundByController: true
I0826 18:12:30.520393       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is bound to claim azuredisk-8705/pvc-dqgfv
I0826 18:12:30.520483       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: claim azuredisk-8705/pvc-dqgfv not found
I0826 18:12:30.520649       1 pv_controller.go:1108] reclaimVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: policy is Delete
I0826 18:12:30.520762       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]]
I0826 18:12:30.520877       1 pv_controller.go:1765] operation "delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]" postponed due to exponential backoff
I0826 18:12:30.521022       1 event.go:291] "Event occurred" object="pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
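While a delete is being retried, the PV sits in phase Failed with reclaim policy Delete and carries the "could not be deleted" message in its status, as the VolumeFailedDelete events above record. A minimal client-go sketch (again assuming a local kubeconfig) for listing PVs in that state:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pvs, err := clientset.CoreV1().PersistentVolumes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print volumes that are stuck in phase Failed with reclaim policy Delete,
	// along with the status message the PV controller recorded.
	for _, pv := range pvs.Items {
		if pv.Status.Phase == corev1.VolumeFailed && pv.Spec.PersistentVolumeReclaimPolicy == corev1.PersistentVolumeReclaimDelete {
			fmt.Printf("%s: %s\n", pv.Name, pv.Status.Message)
		}
	}
}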
... skipping 12 lines ...
I0826 18:12:37.729715       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e from node "capz-z3rmsd-md-0-58bbv"
I0826 18:12:37.729760       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e"
I0826 18:12:37.729846       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e)
I0826 18:12:39.196881       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:12:39.212145       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:12:39.212199       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" with version 1816
I0826 18:12:39.212239       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: phase: Failed, bound to: "azuredisk-8705/pvc-dqgfv (uid: b68b7a98-fc1b-49d6-af98-59a68e8d818e)", boundByController: true
I0826 18:12:39.212284       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is bound to claim azuredisk-8705/pvc-dqgfv
I0826 18:12:39.212302       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: claim azuredisk-8705/pvc-dqgfv not found
I0826 18:12:39.212313       1 pv_controller.go:1108] reclaimVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: policy is Delete
I0826 18:12:39.212327       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]]
I0826 18:12:39.212374       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e] started
I0826 18:12:39.222046       1 pv_controller.go:1340] isVolumeReleased[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is released
I0826 18:12:39.222067       1 pv_controller.go:1404] doDeleteVolume [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]
I0826 18:12:39.222219       1 pv_controller.go:1259] deletion of volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) since it's in attaching or detaching state
I0826 18:12:39.222240       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: set phase Failed
I0826 18:12:39.222251       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: phase Failed already set
E0826 18:12:39.222337       1 goroutinemap.go:150] Operation for "delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]" failed. No retries permitted until 2021-08-26 18:12:40.222258726 +0000 UTC m=+600.081013639 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e) since it's in attaching or detaching state
I0826 18:12:39.349265       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:12:39.487617       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="62.699µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45848" resp=200
I0826 18:12:44.025478       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0826 18:12:45.816591       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 56 items received
I0826 18:12:49.488682       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="71.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45940" resp=200
I0826 18:12:51.764473       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
... skipping 12 lines ...
I0826 18:12:53.880538       1 controller.go:720] It took 3.62e-05 seconds to finish nodeSyncInternal
I0826 18:12:53.948546       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0826 18:12:54.188665       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:12:54.197800       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:12:54.213103       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:12:54.213160       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" with version 1816
I0826 18:12:54.213201       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: phase: Failed, bound to: "azuredisk-8705/pvc-dqgfv (uid: b68b7a98-fc1b-49d6-af98-59a68e8d818e)", boundByController: true
I0826 18:12:54.213238       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is bound to claim azuredisk-8705/pvc-dqgfv
I0826 18:12:54.213261       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: claim azuredisk-8705/pvc-dqgfv not found
I0826 18:12:54.213276       1 pv_controller.go:1108] reclaimVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: policy is Delete
I0826 18:12:54.213293       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]]
I0826 18:12:54.213323       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e] started
I0826 18:12:54.221459       1 pv_controller.go:1340] isVolumeReleased[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is released
I0826 18:12:54.221478       1 pv_controller.go:1404] doDeleteVolume [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]
I0826 18:12:59.000082       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0826 18:12:59.470386       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e
I0826 18:12:59.470514       1 pv_controller.go:1435] volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" deleted
I0826 18:12:59.470576       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: success
I0826 18:12:59.478535       1 pv_protection_controller.go:205] Got event on PV pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e
I0826 18:12:59.478778       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e" with version 1861
I0826 18:12:59.479026       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: phase: Failed, bound to: "azuredisk-8705/pvc-dqgfv (uid: b68b7a98-fc1b-49d6-af98-59a68e8d818e)", boundByController: true
I0826 18:12:59.479121       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: volume is bound to claim azuredisk-8705/pvc-dqgfv
I0826 18:12:59.479229       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: claim azuredisk-8705/pvc-dqgfv not found
I0826 18:12:59.479250       1 pv_controller.go:1108] reclaimVolume[pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e]: policy is Delete
I0826 18:12:59.479311       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e[f02949ad-2140-4016-a8c6-9faa09520310]]
I0826 18:12:59.479383       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e] started
I0826 18:12:59.479463       1 pv_protection_controller.go:125] Processing PV pvc-b68b7a98-fc1b-49d6-af98-59a68e8d818e
... skipping 273 lines ...
I0826 18:13:30.547515       1 pv_controller.go:1108] reclaimVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: policy is Delete
I0826 18:13:30.547528       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]]
I0826 18:13:30.547538       1 pv_controller.go:1763] operation "delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]" is already running, skipping
I0826 18:13:30.547618       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7] started
I0826 18:13:30.549614       1 pv_controller.go:1340] isVolumeReleased[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is released
I0826 18:13:30.549632       1 pv_controller.go:1404] doDeleteVolume [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]
I0826 18:13:30.575197       1 pv_controller.go:1259] deletion of volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:13:30.575352       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: set phase Failed
I0826 18:13:30.575445       1 pv_controller.go:858] updating PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: set phase Failed
I0826 18:13:30.581206       1 pv_protection_controller.go:205] Got event on PV pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7
I0826 18:13:30.581679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" with version 1963
I0826 18:13:30.582036       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: phase: Failed, bound to: "azuredisk-2451/pvc-v5v7b (uid: 9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7)", boundByController: true
I0826 18:13:30.582141       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is bound to claim azuredisk-2451/pvc-v5v7b
I0826 18:13:30.582315       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: claim azuredisk-2451/pvc-v5v7b not found
I0826 18:13:30.582398       1 pv_controller.go:1108] reclaimVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: policy is Delete
I0826 18:13:30.582453       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]]
I0826 18:13:30.582524       1 pv_controller.go:1763] operation "delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]" is already running, skipping
I0826 18:13:30.581917       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" with version 1963
I0826 18:13:30.582710       1 pv_controller.go:879] volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" entered phase "Failed"
I0826 18:13:30.582791       1 pv_controller.go:901] volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
E0826 18:13:30.582922       1 goroutinemap.go:150] Operation for "delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]" failed. No retries permitted until 2021-08-26 18:13:31.082829919 +0000 UTC m=+650.941584832 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:13:30.583045       1 event.go:291] "Event occurred" object="pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted"
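
Annotation: the repeated delete attempts for pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 above (and for pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 further down) are spaced by a doubling durationBeforeRetry: 500ms here, 1s a few lines below, 2s for the later volume. The Go sketch that follows only illustrates that doubling-with-a-cap pattern; it is NOT the kube-controller-manager goroutinemap/backoff code, and maxDelay is an assumed cap rather than a value taken from this log.

package main

import (
	"fmt"
	"time"
)

// Doubling retry delay as seen in the controller log
// (durationBeforeRetry 500ms -> 1s -> 2s). Illustrative only.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2 * time.Minute // assumption for the sketch, not from the log
)

// retryDelay returns the wait before retry number `attempt` (0-indexed),
// doubling the previous delay and clamping at maxDelay.
func retryDelay(attempt int) time.Duration {
	d := initialDelay << attempt // 500ms, 1s, 2s, 4s, ...
	if d <= 0 || d > maxDelay {  // clamp, and guard against shift overflow
		return maxDelay
	}
	return d
}

func main() {
	for i := 0; i < 5; i++ {
		fmt.Printf("delete attempt %d failed -> wait %v before retrying\n", i+1, retryDelay(i))
	}
}
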
I0826 18:13:33.867236       1 gc_controller.go:161] GC'ing orphaned
I0826 18:13:33.867278       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:13:36.574911       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:13:36.574945       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:13:36.596268       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-z3rmsd-md-0-sq4fr" succeeded. VolumesAttached: []
... skipping 6 lines ...
I0826 18:13:36.641632       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7)
I0826 18:13:37.000406       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-08-26 18:13:37.000341774 +0000 UTC m=+656.859096587"
I0826 18:13:37.001434       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.07509ms"
I0826 18:13:39.199062       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:13:39.215275       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:13:39.215398       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" with version 1963
I0826 18:13:39.215512       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: phase: Failed, bound to: "azuredisk-2451/pvc-v5v7b (uid: 9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7)", boundByController: true
I0826 18:13:39.215573       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is bound to claim azuredisk-2451/pvc-v5v7b
I0826 18:13:39.215600       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: claim azuredisk-2451/pvc-v5v7b not found
I0826 18:13:39.215610       1 pv_controller.go:1108] reclaimVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: policy is Delete
I0826 18:13:39.215716       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]]
I0826 18:13:39.215785       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7] started
I0826 18:13:39.224504       1 pv_controller.go:1340] isVolumeReleased[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is released
I0826 18:13:39.224529       1 pv_controller.go:1404] doDeleteVolume [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]
I0826 18:13:39.224569       1 pv_controller.go:1259] deletion of volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) since it's in attaching or detaching state
I0826 18:13:39.224591       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: set phase Failed
I0826 18:13:39.224605       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: phase Failed already set
E0826 18:13:39.224641       1 goroutinemap.go:150] Operation for "delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]" failed. No retries permitted until 2021-08-26 18:13:40.224617278 +0000 UTC m=+660.083372091 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7) since it's in attaching or detaching state
I0826 18:13:39.359716       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-sq4fr ReadyCondition updated. Updating timestamp.
I0826 18:13:39.487780       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="77.599µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46430" resp=200
I0826 18:13:45.207260       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0826 18:13:49.487638       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="59.599µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46520" resp=200
I0826 18:13:51.000430       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-08-26 18:13:51.000367217 +0000 UTC m=+670.859122130"
I0826 18:13:51.000943       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="562.993µs"
... skipping 5 lines ...
I0826 18:13:53.868094       1 gc_controller.go:161] GC'ing orphaned
I0826 18:13:53.868130       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:13:54.190469       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:13:54.199706       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:13:54.215952       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:13:54.216028       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" with version 1963
I0826 18:13:54.216074       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: phase: Failed, bound to: "azuredisk-2451/pvc-v5v7b (uid: 9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7)", boundByController: true
I0826 18:13:54.216111       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is bound to claim azuredisk-2451/pvc-v5v7b
I0826 18:13:54.216134       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: claim azuredisk-2451/pvc-v5v7b not found
I0826 18:13:54.216160       1 pv_controller.go:1108] reclaimVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: policy is Delete
I0826 18:13:54.216193       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]]
I0826 18:13:54.216225       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7] started
I0826 18:13:54.221841       1 pv_controller.go:1340] isVolumeReleased[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is released
... skipping 2 lines ...
I0826 18:13:59.408708       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7
I0826 18:13:59.408745       1 pv_controller.go:1435] volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" deleted
I0826 18:13:59.408789       1 pv_controller.go:1283] deleteVolumeOperation [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: success
I0826 18:13:59.418304       1 pv_protection_controller.go:205] Got event on PV pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7
I0826 18:13:59.418336       1 pv_protection_controller.go:125] Processing PV pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7
I0826 18:13:59.418759       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" with version 2009
I0826 18:13:59.418850       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: phase: Failed, bound to: "azuredisk-2451/pvc-v5v7b (uid: 9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7)", boundByController: true
I0826 18:13:59.418939       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: volume is bound to claim azuredisk-2451/pvc-v5v7b
I0826 18:13:59.419056       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: claim azuredisk-2451/pvc-v5v7b not found
I0826 18:13:59.419077       1 pv_controller.go:1108] reclaimVolume[pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7]: policy is Delete
I0826 18:13:59.419339       1 pv_controller.go:1752] scheduleOperation[delete-pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7[1d135380-715a-4b88-bce7-0cf136ad1686]]
I0826 18:13:59.419479       1 pv_controller.go:1231] deleteVolumeOperation [pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7] started
I0826 18:13:59.423218       1 pv_controller.go:1243] Volume "pvc-9bfed805-4fa8-4ee5-aa09-f7cd533ca4f7" is already being deleted
... skipping 48 lines ...
I0826 18:14:06.964595       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2451, name azuredisk-volume-tester-8g8nw.169eedd3e0a762e3, uid db6a0aba-3203-4638-9a8b-4ed5aa0be75e, event type delete
I0826 18:14:06.969664       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2451, name pvc-v5v7b.169eedcdfbe167b8, uid b6f871bf-3546-4c2e-a5f2-ee603b3e651b, event type delete
I0826 18:14:06.973400       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2451, name pvc-v5v7b.169eedce9309cd67, uid 50d72d83-d249-4a04-bcc5-46f8810e73b8, event type delete
I0826 18:14:06.988404       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-2451, name kube-root-ca.crt, uid 29540190-2e9d-4dd5-87b3-74b9cd0a6d21, event type delete
I0826 18:14:06.993822       1 publisher.go:186] Finished syncing namespace "azuredisk-2451" (5.121953ms)
I0826 18:14:07.002681       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2451, name default-token-89jmj, uid d74b3548-7498-4bc4-bee7-9c3abd17ef30, event type delete
E0826 18:14:07.028851       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2451/default: secrets "default-token-zkn8v" is forbidden: unable to create new content in namespace azuredisk-2451 because it is being terminated
I0826 18:14:07.089599       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 StorageAccountType:Premium_LRS Size:10
I0826 18:14:07.093857       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2451/default), service account deleted, removing tokens
I0826 18:14:07.093907       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2451, name default, uid 42a61b10-d8b8-43cd-a5bf-17fdde4838ca, event type delete
I0826 18:14:07.093941       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2451" (1.9µs)
I0826 18:14:07.110076       1 azure_managedDiskController.go:380] Azure disk "capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573" is not zoned
I0826 18:14:07.110645       1 pv_controller.go:1598] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" for claim "azuredisk-9828/pvc-cg64p" created
... skipping 1067 lines ...
I0826 18:17:47.883056       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:17:47.883068       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:17:47.883082       1 pv_controller.go:1763] operation "delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]" is already running, skipping
I0826 18:17:47.883110       1 pv_controller.go:1231] deleteVolumeOperation [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4] started
I0826 18:17:47.884921       1 pv_controller.go:1340] isVolumeReleased[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is released
I0826 18:17:47.884960       1 pv_controller.go:1404] doDeleteVolume [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]
I0826 18:17:47.908373       1 pv_controller.go:1259] deletion of volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:17:47.908396       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: set phase Failed
I0826 18:17:47.908406       1 pv_controller.go:858] updating PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: set phase Failed
I0826 18:17:47.915395       1 pv_protection_controller.go:205] Got event on PV pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4
I0826 18:17:47.915449       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2435
I0826 18:17:47.915477       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase: Failed, bound to: "azuredisk-9828/pvc-fnm8w (uid: 41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4)", boundByController: true
I0826 18:17:47.915502       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is bound to claim azuredisk-9828/pvc-fnm8w
I0826 18:17:47.915524       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: claim azuredisk-9828/pvc-fnm8w not found
I0826 18:17:47.915532       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:17:47.915544       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:17:47.915551       1 pv_controller.go:1763] operation "delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]" is already running, skipping
I0826 18:17:47.916568       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2435
I0826 18:17:47.916594       1 pv_controller.go:879] volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" entered phase "Failed"
I0826 18:17:47.916609       1 pv_controller.go:901] volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
E0826 18:17:47.916656       1 goroutinemap.go:150] Operation for "delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]" failed. No retries permitted until 2021-08-26 18:17:48.416637074 +0000 UTC m=+908.275391887 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:17:47.916877       1 event.go:291] "Event occurred" object="pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted"
I0826 18:17:47.943422       1 actual_state_of_world.go:427] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 on node "capz-z3rmsd-md-0-sq4fr"
I0826 18:17:49.488175       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="72.099µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48846" resp=200
I0826 18:17:50.091491       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0826 18:17:50.324790       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0826 18:17:53.748470       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 53 lines ...
I0826 18:17:54.240039       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:17:54.240056       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c found: phase: Bound, bound to: "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2", bindCompleted: true, boundByController: true
I0826 18:17:54.240169       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: all is bound
I0826 18:17:54.240187       1 pv_controller.go:858] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Bound
I0826 18:17:54.240197       1 pv_controller.go:861] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase Bound already set
I0826 18:17:54.240290       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2435
I0826 18:17:54.240362       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase: Failed, bound to: "azuredisk-9828/pvc-fnm8w (uid: 41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4)", boundByController: true
I0826 18:17:54.240555       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is bound to claim azuredisk-9828/pvc-fnm8w
I0826 18:17:54.240642       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: claim azuredisk-9828/pvc-fnm8w not found
I0826 18:17:54.240655       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:17:54.240711       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:17:54.240788       1 pv_controller.go:1231] deleteVolumeOperation [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4] started
I0826 18:17:54.253415       1 pv_controller.go:1340] isVolumeReleased[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is released
I0826 18:17:54.253435       1 pv_controller.go:1404] doDeleteVolume [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]
I0826 18:17:54.276737       1 pv_controller.go:1259] deletion of volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:17:54.276765       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: set phase Failed
I0826 18:17:54.276777       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase Failed already set
E0826 18:17:54.276810       1 goroutinemap.go:150] Operation for "delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]" failed. No retries permitted until 2021-08-26 18:17:55.276786283 +0000 UTC m=+915.135541196 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:17:56.861018       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:17:56.861066       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 to the node "capz-z3rmsd-md-0-sq4fr" mounted true
I0826 18:17:56.861080       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:17:56.941709       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573\"}]}}" for node "capz-z3rmsd-md-0-sq4fr" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 0}]
I0826 18:17:56.941992       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4") on node "capz-z3rmsd-md-0-sq4fr" 
I0826 18:17:56.947725       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
... skipping 16 lines ...
I0826 18:18:09.239810       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:18:09.240019       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c found: phase: Bound, bound to: "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2", bindCompleted: true, boundByController: true
I0826 18:18:09.240217       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: all is bound
I0826 18:18:09.240380       1 pv_controller.go:858] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Bound
I0826 18:18:09.240577       1 pv_controller.go:861] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase Bound already set
I0826 18:18:09.240747       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2435
I0826 18:18:09.241005       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase: Failed, bound to: "azuredisk-9828/pvc-fnm8w (uid: 41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4)", boundByController: true
I0826 18:18:09.241192       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is bound to claim azuredisk-9828/pvc-fnm8w
I0826 18:18:09.241393       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: claim azuredisk-9828/pvc-fnm8w not found
I0826 18:18:09.241607       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:18:09.241774       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:18:09.241960       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2049
I0826 18:18:09.242156       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase: Bound, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
... skipping 34 lines ...
I0826 18:18:09.248731       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9828/pvc-sjc5c] status: phase Bound already set
I0826 18:18:09.248744       1 pv_controller.go:1038] volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" bound to claim "azuredisk-9828/pvc-sjc5c"
I0826 18:18:09.248761       1 pv_controller.go:1039] volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" status after binding: phase: Bound, bound to: "azuredisk-9828/pvc-sjc5c (uid: a74f28b0-80c4-4070-9e13-7a907ffce6b2)", boundByController: true
I0826 18:18:09.248777       1 pv_controller.go:1040] claim "azuredisk-9828/pvc-sjc5c" status after binding: phase: Bound, bound to: "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2", bindCompleted: true, boundByController: true
I0826 18:18:09.253863       1 pv_controller.go:1340] isVolumeReleased[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is released
I0826 18:18:09.253887       1 pv_controller.go:1404] doDeleteVolume [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]
I0826 18:18:09.254061       1 pv_controller.go:1259] deletion of volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) since it's in attaching or detaching state
I0826 18:18:09.254081       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: set phase Failed
I0826 18:18:09.254091       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase Failed already set
E0826 18:18:09.254196       1 goroutinemap.go:150] Operation for "delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]" failed. No retries permitted until 2021-08-26 18:18:11.254177115 +0000 UTC m=+931.112931928 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) since it's in attaching or detaching state
I0826 18:18:09.488445       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="68.299µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49042" resp=200
I0826 18:18:12.416028       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) returned with <nil>
I0826 18:18:12.416071       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4) succeeded
I0826 18:18:12.416104       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 was detached from node:capz-z3rmsd-md-0-sq4fr
I0826 18:18:12.416134       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4") on node "capz-z3rmsd-md-0-sq4fr" 
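
Annotation: only after the detach above completes does the periodic delete retry finally succeed (the managed disk for pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 is deleted at 18:18:29, roughly 17 seconds later). The sketch below shows that detach-before-delete ordering schematically; the DiskClient interface and fakeDisk type are hypothetical stand-ins for whatever cloud disk API the controller uses, not the Azure SDK or the in-tree azure cloud provider.

package main

import (
	"errors"
	"fmt"
	"time"
)

// DiskClient is a hypothetical stand-in for a cloud disk API (assumption).
type DiskClient interface {
	Detach(diskURI, node string) error
	IsAttached(diskURI string) (bool, error)
	Delete(diskURI string) error
}

// deleteWhenDetached mirrors the ordering traced in the log: trigger the
// detach, then keep retrying the delete, which is refused while the disk
// is still attached or in a detaching state.
func deleteWhenDetached(c DiskClient, diskURI, node string, interval time.Duration, attempts int) error {
	if err := c.Detach(diskURI, node); err != nil {
		return fmt.Errorf("detach failed: %w", err)
	}
	for i := 0; i < attempts; i++ {
		attached, err := c.IsAttached(diskURI)
		if err != nil {
			return err
		}
		if !attached {
			return c.Delete(diskURI) // only safe once fully detached
		}
		time.Sleep(interval)
	}
	return errors.New("disk still attached after all retries; delete skipped")
}

// fakeDisk is a toy in-memory implementation used only to exercise the sketch.
type fakeDisk struct{ pollsUntilDetached int }

func (f *fakeDisk) Detach(string, string) error { return nil }
func (f *fakeDisk) IsAttached(string) (bool, error) {
	if f.pollsUntilDetached > 0 {
		f.pollsUntilDetached--
		return true, nil
	}
	return false, nil
}
func (f *fakeDisk) Delete(string) error { return nil }

func main() {
	err := deleteWhenDetached(&fakeDisk{pollsUntilDetached: 2},
		"pvc-example-disk-uri", "example-node", 10*time.Millisecond, 5)
	fmt.Println("result:", err) // result: <nil>
}
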
I0826 18:18:13.877808       1 gc_controller.go:161] GC'ing orphaned
... skipping 50 lines ...
I0826 18:18:24.242066       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:18:24.242080       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c found: phase: Bound, bound to: "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2", bindCompleted: true, boundByController: true
I0826 18:18:24.242095       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: all is bound
I0826 18:18:24.242104       1 pv_controller.go:858] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Bound
I0826 18:18:24.242113       1 pv_controller.go:861] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase Bound already set
I0826 18:18:24.242126       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2435
I0826 18:18:24.242148       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase: Failed, bound to: "azuredisk-9828/pvc-fnm8w (uid: 41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4)", boundByController: true
I0826 18:18:24.242176       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is bound to claim azuredisk-9828/pvc-fnm8w
I0826 18:18:24.242195       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: claim azuredisk-9828/pvc-fnm8w not found
I0826 18:18:24.242203       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:18:24.242219       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:18:24.242249       1 pv_controller.go:1231] deleteVolumeOperation [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4] started
I0826 18:18:24.254310       1 pv_controller.go:1340] isVolumeReleased[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is released
... skipping 2 lines ...
I0826 18:18:29.443100       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4
I0826 18:18:29.443138       1 pv_controller.go:1435] volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" deleted
I0826 18:18:29.443154       1 pv_controller.go:1283] deleteVolumeOperation [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: success
I0826 18:18:29.448722       1 pv_protection_controller.go:205] Got event on PV pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4
I0826 18:18:29.448975       1 pv_protection_controller.go:125] Processing PV pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4
I0826 18:18:29.448926       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" with version 2498
I0826 18:18:29.449809       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: phase: Failed, bound to: "azuredisk-9828/pvc-fnm8w (uid: 41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4)", boundByController: true
I0826 18:18:29.449962       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: volume is bound to claim azuredisk-9828/pvc-fnm8w
I0826 18:18:29.458829       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: claim azuredisk-9828/pvc-fnm8w not found
I0826 18:18:29.458944       1 pv_controller.go:1108] reclaimVolume[pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4]: policy is Delete
I0826 18:18:29.459055       1 pv_controller.go:1752] scheduleOperation[delete-pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4[1f3d1118-444d-4dee-b562-a1e88b87717f]]
I0826 18:18:29.459240       1 pv_controller.go:1231] deleteVolumeOperation [pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4] started
I0826 18:18:29.470581       1 pv_controller_base.go:235] volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" deleted
I0826 18:18:29.470589       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4
I0826 18:18:29.471024       1 pv_protection_controller.go:128] Finished processing PV pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4 (22.029941ms)
I0826 18:18:29.471076       1 pv_controller_base.go:505] deletion of claim "azuredisk-9828/pvc-fnm8w" was already processed
I0826 18:18:29.477479       1 pv_controller.go:1238] error reading persistent volume "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4": persistentvolumes "pvc-41ac0ee3-3eeb-41fe-91d8-a3268b6cabc4" not found
I0826 18:18:29.488515       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="89.299µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49228" resp=200
I0826 18:18:33.878602       1 gc_controller.go:161] GC'ing orphaned
I0826 18:18:33.878638       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:18:34.171745       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-66vk2"
I0826 18:18:34.173223       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-66vk2, PodDisruptionBudget controller will avoid syncing.
I0826 18:18:34.173795       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-66vk2"
... skipping 192 lines ...
I0826 18:19:05.263084       1 pv_controller.go:1108] reclaimVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: policy is Delete
I0826 18:19:05.263167       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]]
I0826 18:19:05.263209       1 pv_controller.go:1763] operation "delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]" is already running, skipping
I0826 18:19:05.263341       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2] started
I0826 18:19:05.265609       1 pv_controller.go:1340] isVolumeReleased[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is released
I0826 18:19:05.265637       1 pv_controller.go:1404] doDeleteVolume [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]
I0826 18:19:05.287968       1 pv_controller.go:1259] deletion of volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:19:05.288117       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Failed
I0826 18:19:05.288143       1 pv_controller.go:858] updating PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Failed
I0826 18:19:05.292679       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" with version 2561
I0826 18:19:05.292723       1 pv_controller.go:879] volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" entered phase "Failed"
I0826 18:19:05.292854       1 pv_controller.go:901] volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:19:05.292961       1 goroutinemap.go:150] Operation for "delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]" failed. No retries permitted until 2021-08-26 18:19:05.792936546 +0000 UTC m=+985.651691459 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:19:05.293288       1 event.go:291] "Event occurred" object="pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
I0826 18:19:05.293501       1 pv_protection_controller.go:205] Got event on PV pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2
I0826 18:19:05.293641       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" with version 2561
I0826 18:19:05.293806       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase: Failed, bound to: "azuredisk-9828/pvc-sjc5c (uid: a74f28b0-80c4-4070-9e13-7a907ffce6b2)", boundByController: true
I0826 18:19:05.293967       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:19:05.294109       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c not found
I0826 18:19:05.294215       1 pv_controller.go:1108] reclaimVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: policy is Delete
I0826 18:19:05.294382       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]]
I0826 18:19:05.294499       1 pv_controller.go:1765] operation "delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]" postponed due to exponential backoff
I0826 18:19:06.745283       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodDisruptionBudget total 0 items received
... skipping 14 lines ...
I0826 18:19:09.243685       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:19:09.243705       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p found: phase: Bound, bound to: "pvc-eac92eb7-a833-4621-934d-e781bb0d6573", bindCompleted: true, boundByController: true
I0826 18:19:09.243723       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: all is bound
I0826 18:19:09.243733       1 pv_controller.go:858] updating PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: set phase Bound
I0826 18:19:09.243745       1 pv_controller.go:861] updating PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase Bound already set
I0826 18:19:09.243765       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" with version 2561
I0826 18:19:09.243791       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase: Failed, bound to: "azuredisk-9828/pvc-sjc5c (uid: a74f28b0-80c4-4070-9e13-7a907ffce6b2)", boundByController: true
I0826 18:19:09.243816       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:19:09.243837       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c not found
I0826 18:19:09.243846       1 pv_controller.go:1108] reclaimVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: policy is Delete
I0826 18:19:09.243863       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]]
I0826 18:19:09.243896       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2] started
I0826 18:19:09.244241       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9828/pvc-cg64p" with version 2052
... skipping 11 lines ...
I0826 18:19:09.244413       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9828/pvc-cg64p] status: phase Bound already set
I0826 18:19:09.244424       1 pv_controller.go:1038] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" bound to claim "azuredisk-9828/pvc-cg64p"
I0826 18:19:09.244440       1 pv_controller.go:1039] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" status after binding: phase: Bound, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
I0826 18:19:09.244456       1 pv_controller.go:1040] claim "azuredisk-9828/pvc-cg64p" status after binding: phase: Bound, bound to: "pvc-eac92eb7-a833-4621-934d-e781bb0d6573", bindCompleted: true, boundByController: true
I0826 18:19:09.249721       1 pv_controller.go:1340] isVolumeReleased[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is released
I0826 18:19:09.249757       1 pv_controller.go:1404] doDeleteVolume [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]
I0826 18:19:09.249806       1 pv_controller.go:1259] deletion of volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) since it's in attaching or detaching state
I0826 18:19:09.249818       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: set phase Failed
I0826 18:19:09.249827       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase Failed already set
E0826 18:19:09.249850       1 goroutinemap.go:150] Operation for "delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]" failed. No retries permitted until 2021-08-26 18:19:10.249835511 +0000 UTC m=+990.108590324 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2) since it's in attaching or detaching state
I0826 18:19:09.420688       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:19:09.488768       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="63.299µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49638" resp=200
I0826 18:19:10.186662       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 6 items received
I0826 18:19:13.879930       1 gc_controller.go:161] GC'ing orphaned
I0826 18:19:13.879973       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:19:15.993257       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
... skipping 10 lines ...
I0826 18:19:24.243866       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:19:24.243883       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p found: phase: Bound, bound to: "pvc-eac92eb7-a833-4621-934d-e781bb0d6573", bindCompleted: true, boundByController: true
I0826 18:19:24.243897       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: all is bound
I0826 18:19:24.243906       1 pv_controller.go:858] updating PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: set phase Bound
I0826 18:19:24.243916       1 pv_controller.go:861] updating PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase Bound already set
I0826 18:19:24.243934       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" with version 2561
I0826 18:19:24.243955       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase: Failed, bound to: "azuredisk-9828/pvc-sjc5c (uid: a74f28b0-80c4-4070-9e13-7a907ffce6b2)", boundByController: true
I0826 18:19:24.243976       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:19:24.243991       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c not found
I0826 18:19:24.243998       1 pv_controller.go:1108] reclaimVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: policy is Delete
I0826 18:19:24.244015       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]]
I0826 18:19:24.244044       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2] started
I0826 18:19:24.244103       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9828/pvc-cg64p" with version 2052
... skipping 19 lines ...
I0826 18:19:29.447878       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2
I0826 18:19:29.447931       1 pv_controller.go:1435] volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" deleted
I0826 18:19:29.448120       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: success
I0826 18:19:29.455695       1 pv_protection_controller.go:205] Got event on PV pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2
I0826 18:19:29.456926       1 pv_protection_controller.go:125] Processing PV pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2
I0826 18:19:29.457469       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2" with version 2598
I0826 18:19:29.457653       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: phase: Failed, bound to: "azuredisk-9828/pvc-sjc5c (uid: a74f28b0-80c4-4070-9e13-7a907ffce6b2)", boundByController: true
I0826 18:19:29.457827       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: volume is bound to claim azuredisk-9828/pvc-sjc5c
I0826 18:19:29.457977       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: claim azuredisk-9828/pvc-sjc5c not found
I0826 18:19:29.458116       1 pv_controller.go:1108] reclaimVolume[pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2]: policy is Delete
I0826 18:19:29.458261       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]]
I0826 18:19:29.458403       1 pv_controller.go:1763] operation "delete-pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2[2f3de074-14de-4b1e-b7be-b11f8eeaacfb]" is already running, skipping
I0826 18:19:29.470474       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-a74f28b0-80c4-4070-9e13-7a907ffce6b2
... skipping 155 lines ...
I0826 18:20:03.187481       1 pv_controller.go:1108] reclaimVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: policy is Delete
I0826 18:20:03.187548       1 pv_controller.go:1752] scheduleOperation[delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]]
I0826 18:20:03.187591       1 pv_controller.go:1763] operation "delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]" is already running, skipping
I0826 18:20:03.189961       1 pv_controller.go:1340] isVolumeReleased[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is released
I0826 18:20:03.190006       1 pv_controller.go:1404] doDeleteVolume [pvc-eac92eb7-a833-4621-934d-e781bb0d6573]
I0826 18:20:03.194336       1 actual_state_of_world.go:427] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 on node "capz-z3rmsd-md-0-sq4fr"
I0826 18:20:03.220460       1 pv_controller.go:1259] deletion of volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:20:03.220491       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: set phase Failed
I0826 18:20:03.220502       1 pv_controller.go:858] updating PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: set phase Failed
I0826 18:20:03.226082       1 pv_protection_controller.go:205] Got event on PV pvc-eac92eb7-a833-4621-934d-e781bb0d6573
I0826 18:20:03.226464       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2660
I0826 18:20:03.226649       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase: Failed, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
I0826 18:20:03.226867       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:20:03.227036       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p not found
I0826 18:20:03.227249       1 pv_controller.go:1108] reclaimVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: policy is Delete
I0826 18:20:03.227440       1 pv_controller.go:1752] scheduleOperation[delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]]
I0826 18:20:03.227614       1 pv_controller.go:1763] operation "delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]" is already running, skipping
I0826 18:20:03.227390       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2660
I0826 18:20:03.227891       1 pv_controller.go:879] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" entered phase "Failed"
I0826 18:20:03.228083       1 pv_controller.go:901] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
E0826 18:20:03.228377       1 goroutinemap.go:150] Operation for "delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]" failed. No retries permitted until 2021-08-26 18:20:03.728305311 +0000 UTC m=+1043.587060424 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:20:03.228572       1 event.go:291] "Event occurred" object="pvc-eac92eb7-a833-4621-934d-e781bb0d6573" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted"
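The cycle just above (the delete attempt fails while the disk is still attached, then goroutinemap schedules the next attempt with a doubling delay: 500ms here, then 1s, 2s, ...) is a plain exponential-backoff retry. A minimal sketch of that pattern using the k8s.io/apimachinery wait helpers — the helper name, timings and stand-in error below are illustrative, not the controller's actual code:

package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// errStillAttached stands in for the cloud provider's
// "already attached to node ..., could not be deleted" error seen in the log.
var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// tryDeleteDisk is a hypothetical helper: deletion only succeeds once the
// attach/detach controller has detached the disk from the VM.
func tryDeleteDisk(detached bool) error {
	if !detached {
		return errStillAttached
	}
	return nil
}

func main() {
	detachDone := time.Now().Add(3 * time.Second) // pretend the detach finishes in ~3s

	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 6}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := tryDeleteDisk(time.Now().After(detachDone)); err != nil {
			fmt.Println("delete failed, retrying with backoff:", err)
			return false, nil // not done yet; sleep and retry with a doubled delay
		}
		return true, nil // delete succeeded
	})
	fmt.Println("final result:", err) // <nil> once the disk has been detached
}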
I0826 18:20:06.981491       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:20:06.981530       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:20:07.035681       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-z3rmsd-md-0-sq4fr" succeeded. VolumesAttached: []
I0826 18:20:07.035771       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573") on node "capz-z3rmsd-md-0-sq4fr" 
I0826 18:20:07.036579       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
... skipping 2 lines ...
I0826 18:20:07.040000       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 from node "capz-z3rmsd-md-0-sq4fr"
I0826 18:20:07.113973       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573"
I0826 18:20:07.114005       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573)
I0826 18:20:09.219184       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:20:09.247749       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:20:09.248063       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2660
I0826 18:20:09.248210       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase: Failed, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
I0826 18:20:09.248330       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:20:09.248376       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p not found
I0826 18:20:09.248386       1 pv_controller.go:1108] reclaimVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: policy is Delete
I0826 18:20:09.248475       1 pv_controller.go:1752] scheduleOperation[delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]]
I0826 18:20:09.248690       1 pv_controller.go:1231] deleteVolumeOperation [pvc-eac92eb7-a833-4621-934d-e781bb0d6573] started
I0826 18:20:09.255741       1 pv_controller.go:1340] isVolumeReleased[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is released
I0826 18:20:09.255772       1 pv_controller.go:1404] doDeleteVolume [pvc-eac92eb7-a833-4621-934d-e781bb0d6573]
I0826 18:20:09.255809       1 pv_controller.go:1259] deletion of volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) since it's in attaching or detaching state
I0826 18:20:09.255823       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: set phase Failed
I0826 18:20:09.255832       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase Failed already set
E0826 18:20:09.255865       1 goroutinemap.go:150] Operation for "delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]" failed. No retries permitted until 2021-08-26 18:20:10.255841704 +0000 UTC m=+1050.114596617 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573) since it's in attaching or detaching state
I0826 18:20:09.430441       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-sq4fr ReadyCondition updated. Updating timestamp.
I0826 18:20:09.488225       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="76.899µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50232" resp=200
I0826 18:20:13.881764       1 gc_controller.go:161] GC'ing orphaned
I0826 18:20:13.881797       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:20:14.372253       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 21 items received
I0826 18:20:19.488789       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="108.499µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50328" resp=200
... skipping 2 lines ...
I0826 18:20:22.662796       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573 was detached from node:capz-z3rmsd-md-0-sq4fr
I0826 18:20:22.662823       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573") on node "capz-z3rmsd-md-0-sq4fr" 
I0826 18:20:24.200189       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:20:24.219628       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:20:24.249142       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:20:24.249229       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2660
I0826 18:20:24.249323       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase: Failed, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
I0826 18:20:24.249421       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:20:24.249516       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p not found
I0826 18:20:24.249529       1 pv_controller.go:1108] reclaimVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: policy is Delete
I0826 18:20:24.249593       1 pv_controller.go:1752] scheduleOperation[delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]]
I0826 18:20:24.249723       1 pv_controller.go:1231] deleteVolumeOperation [pvc-eac92eb7-a833-4621-934d-e781bb0d6573] started
I0826 18:20:24.255784       1 pv_controller.go:1340] isVolumeReleased[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is released
... skipping 4 lines ...
I0826 18:20:29.434535       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-eac92eb7-a833-4621-934d-e781bb0d6573
I0826 18:20:29.434588       1 pv_controller.go:1435] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" deleted
I0826 18:20:29.434605       1 pv_controller.go:1283] deleteVolumeOperation [pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: success
I0826 18:20:29.440824       1 pv_protection_controller.go:205] Got event on PV pvc-eac92eb7-a833-4621-934d-e781bb0d6573
I0826 18:20:29.440853       1 pv_protection_controller.go:125] Processing PV pvc-eac92eb7-a833-4621-934d-e781bb0d6573
I0826 18:20:29.441318       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" with version 2702
I0826 18:20:29.441357       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: phase: Failed, bound to: "azuredisk-9828/pvc-cg64p (uid: eac92eb7-a833-4621-934d-e781bb0d6573)", boundByController: true
I0826 18:20:29.441388       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: volume is bound to claim azuredisk-9828/pvc-cg64p
I0826 18:20:29.441413       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: claim azuredisk-9828/pvc-cg64p not found
I0826 18:20:29.441439       1 pv_controller.go:1108] reclaimVolume[pvc-eac92eb7-a833-4621-934d-e781bb0d6573]: policy is Delete
I0826 18:20:29.441470       1 pv_controller.go:1752] scheduleOperation[delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]]
I0826 18:20:29.441478       1 pv_controller.go:1763] operation "delete-pvc-eac92eb7-a833-4621-934d-e781bb0d6573[862f7e43-446c-444f-b942-6ac4a1f8c5f6]" is already running, skipping
I0826 18:20:29.451168       1 pv_controller_base.go:235] volume "pvc-eac92eb7-a833-4621-934d-e781bb0d6573" deleted
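The lines above complete this volume's reclaim: the detach finishes, the retried delete of the managed disk succeeds, and the PV object is removed. A rough sketch, with assumed names rather than this repo's test helpers, of how a caller could wait for that end state via client-go:

package e2esketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls until the PersistentVolume object is gone, i.e. the
// reclaim sequence seen in the log (detach, disk delete, finalizer removal)
// has completed, or until the timeout expires.
func waitForPVDeleted(ctx context.Context, c kubernetes.Interface, pvName string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			return true, nil // PV removed: reclaim finished
		case err != nil:
			return false, err // unexpected API error: stop polling
		default:
			return false, nil // PV still exists: keep polling
		}
	})
}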
... skipping 45 lines ...
I0826 18:20:36.551768       1 pv_controller.go:1485] provisionClaimOperation [azuredisk-1563/pvc-t5cbw] started, class: "azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-dsbqs"
I0826 18:20:36.551979       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-1563/pvc-t5cbw]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0826 18:20:36.553635       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59"
I0826 18:20:36.556326       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59" (25.345999ms)
I0826 18:20:36.556617       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59", timestamp:time.Time{wall:0xc04214bd1fa6aa79, ext:1076389771226, loc:(*time.Location)(0x7505dc0)}}
I0826 18:20:36.556535       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" duration="30.30106ms"
I0826 18:20:36.557566       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-7d6gv\": the object has been modified; please apply your changes to the latest version and try again"
I0826 18:20:36.557736       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" startTime="2021-08-26 18:20:36.557711101 +0000 UTC m=+1076.416465914"
I0826 18:20:36.558563       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-7d6gv" timed out (false) [last progress check: 2021-08-26 18:20:36 +0000 UTC - now: 2021-08-26 18:20:36.558556895 +0000 UTC m=+1076.417311808]
I0826 18:20:36.558823       1 pvc_protection_controller.go:353] "Got event on PVC" azuredisk-1563/pvc-t5cbw="(MISSING)"
I0826 18:20:36.558860       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1563/pvc-t5cbw" with version 2733
I0826 18:20:36.558957       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1563/pvc-t5cbw]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0826 18:20:36.558991       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1563/pvc-t5cbw]: no volume found
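provisionClaimOperation above shows the next PVC being handled by the in-tree kubernetes.io/azure-disk provisioner through the test's generated StorageClass. For illustration only, a comparable StorageClass (Delete reclaim policy as in this run; the name and parameters below are assumptions, not values from this log) expressed with client-go types:

package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createExampleStorageClass creates a StorageClass broadly comparable to the
// generated "...-kubernetes.io-azure-disk-dynamic-sc-..." class referenced
// above: in-tree azure-disk provisioner with Delete reclaim policy.
func createExampleStorageClass(ctx context.Context, c kubernetes.Interface) (*storagev1.StorageClass, error) {
	reclaim := corev1.PersistentVolumeReclaimDelete
	sc := &storagev1.StorageClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "azuredisk-example-sc"}, // illustrative name
		Provisioner:   "kubernetes.io/azure-disk",
		ReclaimPolicy: &reclaim,
		Parameters:    map[string]string{"skuName": "StandardSSD_LRS"}, // assumed parameter
	}
	return c.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
}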
... skipping 264 lines ...
I0826 18:21:05.778133       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59", timestamp:time.Time{wall:0xc04214c46a814129, ext:1105571868810, loc:(*time.Location)(0x7505dc0)}}
I0826 18:21:05.778356       1 controller_utils.go:948] Ignoring inactive pod azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59-n7b8l in state Running, deletion time 2021-08-26 18:21:35 +0000 UTC
I0826 18:21:05.778521       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59" (392.697µs)
I0826 18:21:05.778712       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-7d6gv-548ccfdc59-rhhd5"
I0826 18:21:05.779184       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-7d6gv-548ccfdc59-rhhd5, PodDisruptionBudget controller will avoid syncing.
I0826 18:21:05.779203       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-7d6gv-548ccfdc59-rhhd5"
W0826 18:21:05.779570       1 reconciler.go:376] Multi-Attach error for volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856") from node "capz-z3rmsd-md-0-58bbv" Volume is already used by pods azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59-n7b8l on node capz-z3rmsd-md-0-sq4fr
I0826 18:21:05.779626       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-7d6gv-548ccfdc59-rhhd5" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856\" Volume is already used by pod(s) azuredisk-volume-tester-7d6gv-548ccfdc59-n7b8l"
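An Azure disk can only be attached to one node at a time, so the replacement pod scheduled to capz-z3rmsd-md-0-58bbv cannot attach pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 while the old node still reports it as attached; the Multi-Attach warning above repeats until the volume drops out of the old node's status. A small illustrative helper (assumed names, not from this repo) that lists which nodes still report a given volume in status.volumesAttached:

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodesWithVolumeAttached returns the nodes whose status still lists the
// given unique volume name (e.g. "kubernetes.io/azure-disk/<disk URI>") in
// status.volumesAttached; the Multi-Attach condition clears only after the
// old node disappears from this set.
func nodesWithVolumeAttached(ctx context.Context, c kubernetes.Interface, uniqueName string) ([]string, error) {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var attached []string
	for _, n := range nodes.Items {
		for _, v := range n.Status.VolumesAttached {
			if string(v.Name) == uniqueName {
				attached = append(attached, n.Name)
				break
			}
		}
	}
	return attached, nil
}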
I0826 18:21:05.784191       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv"
I0826 18:21:05.784756       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" duration="10.269218ms"
I0826 18:21:05.784944       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" startTime="2021-08-26 18:21:05.78478493 +0000 UTC m=+1105.643539843"
I0826 18:21:05.785629       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-7d6gv" for a progress check after 593s
I0826 18:21:05.785675       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-7d6gv" duration="874.393µs"
I0826 18:21:08.145289       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
... skipping 413 lines ...
I0826 18:22:49.685949       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: claim azuredisk-1563/pvc-t5cbw not found
I0826 18:22:49.686007       1 pv_controller.go:1108] reclaimVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: policy is Delete
I0826 18:22:49.686058       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]]
I0826 18:22:49.686090       1 pv_controller.go:1763] operation "delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]" is already running, skipping
I0826 18:22:49.687241       1 pv_controller.go:1340] isVolumeReleased[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is released
I0826 18:22:49.687265       1 pv_controller.go:1404] doDeleteVolume [pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]
I0826 18:22:49.709674       1 pv_controller.go:1259] deletion of volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:22:49.709704       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: set phase Failed
I0826 18:22:49.709715       1 pv_controller.go:858] updating PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: set phase Failed
I0826 18:22:49.716466       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" with version 3024
I0826 18:22:49.716525       1 pv_controller.go:879] volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" entered phase "Failed"
I0826 18:22:49.716793       1 pv_controller.go:901] volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:22:49.716848       1 goroutinemap.go:150] Operation for "delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]" failed. No retries permitted until 2021-08-26 18:22:50.216824113 +0000 UTC m=+1210.075579026 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:22:49.716474       1 pv_protection_controller.go:205] Got event on PV pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856
I0826 18:22:49.717099       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" with version 3024
I0826 18:22:49.717279       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: phase: Failed, bound to: "azuredisk-1563/pvc-t5cbw (uid: b4e8eca1-0bbf-42bb-b965-38acfa929856)", boundByController: true
I0826 18:22:49.717449       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is bound to claim azuredisk-1563/pvc-t5cbw
I0826 18:22:49.717585       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: claim azuredisk-1563/pvc-t5cbw not found
I0826 18:22:49.717728       1 pv_controller.go:1108] reclaimVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: policy is Delete
I0826 18:22:49.717755       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]]
I0826 18:22:49.717286       1 event.go:291] "Event occurred" object="pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
I0826 18:22:49.717879       1 pv_controller.go:1765] operation "delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]" postponed due to exponential backoff
... skipping 8 lines ...
I0826 18:22:53.888471       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:22:53.949326       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0826 18:22:54.204310       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:22:54.229443       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:22:54.258638       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:22:54.258743       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" with version 3024
I0826 18:22:54.258856       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: phase: Failed, bound to: "azuredisk-1563/pvc-t5cbw (uid: b4e8eca1-0bbf-42bb-b965-38acfa929856)", boundByController: true
I0826 18:22:54.258944       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is bound to claim azuredisk-1563/pvc-t5cbw
I0826 18:22:54.258984       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: claim azuredisk-1563/pvc-t5cbw not found
I0826 18:22:54.259020       1 pv_controller.go:1108] reclaimVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: policy is Delete
I0826 18:22:54.259064       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]]
I0826 18:22:54.259216       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856] started
I0826 18:22:54.274597       1 pv_controller.go:1340] isVolumeReleased[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is released
I0826 18:22:54.274623       1 pv_controller.go:1404] doDeleteVolume [pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]
I0826 18:22:54.313484       1 pv_controller.go:1259] deletion of volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:22:54.313517       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: set phase Failed
I0826 18:22:54.313531       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: phase Failed already set
E0826 18:22:54.313755       1 goroutinemap.go:150] Operation for "delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]" failed. No retries permitted until 2021-08-26 18:22:55.313735476 +0000 UTC m=+1215.172490289 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:22:55.834031       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0826 18:22:58.231375       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:22:58.231407       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:22:58.254518       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:22:58.254725       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:22:58.254642       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-z3rmsd-md-0-58bbv" succeeded. VolumesAttached: []
... skipping 11 lines ...
I0826 18:23:08.720608       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856) succeeded
I0826 18:23:08.721262       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856 was detached from node:capz-z3rmsd-md-0-58bbv
I0826 18:23:08.721336       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856") on node "capz-z3rmsd-md-0-58bbv" 
I0826 18:23:09.230470       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:23:09.258820       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:23:09.258921       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" with version 3024
I0826 18:23:09.259156       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: phase: Failed, bound to: "azuredisk-1563/pvc-t5cbw (uid: b4e8eca1-0bbf-42bb-b965-38acfa929856)", boundByController: true
I0826 18:23:09.259233       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is bound to claim azuredisk-1563/pvc-t5cbw
I0826 18:23:09.259261       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: claim azuredisk-1563/pvc-t5cbw not found
I0826 18:23:09.259294       1 pv_controller.go:1108] reclaimVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: policy is Delete
I0826 18:23:09.259315       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]]
I0826 18:23:09.259394       1 pv_controller.go:1231] deleteVolumeOperation [pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856] started
I0826 18:23:09.266142       1 pv_controller.go:1340] isVolumeReleased[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is released
... skipping 4 lines ...
I0826 18:23:14.468682       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856
I0826 18:23:14.468815       1 pv_controller.go:1435] volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" deleted
I0826 18:23:14.468867       1 pv_controller.go:1283] deleteVolumeOperation [pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: success
I0826 18:23:14.482518       1 pv_protection_controller.go:205] Got event on PV pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856
I0826 18:23:14.482547       1 pv_protection_controller.go:125] Processing PV pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856
I0826 18:23:14.482934       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856" with version 3061
I0826 18:23:14.482969       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: phase: Failed, bound to: "azuredisk-1563/pvc-t5cbw (uid: b4e8eca1-0bbf-42bb-b965-38acfa929856)", boundByController: true
I0826 18:23:14.482994       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: volume is bound to claim azuredisk-1563/pvc-t5cbw
I0826 18:23:14.483012       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: claim azuredisk-1563/pvc-t5cbw not found
I0826 18:23:14.483020       1 pv_controller.go:1108] reclaimVolume[pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856]: policy is Delete
I0826 18:23:14.483034       1 pv_controller.go:1752] scheduleOperation[delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]]
I0826 18:23:14.483053       1 pv_controller.go:1763] operation "delete-pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856[52ba041e-5c05-41e3-a8f4-2192618c11ff]" is already running, skipping
I0826 18:23:14.490181       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-b4e8eca1-0bbf-42bb-b965-38acfa929856
... skipping 197 lines ...
I0826 18:23:33.889836       1 gc_controller.go:161] GC'ing orphaned
I0826 18:23:33.889870       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:23:33.914965       1 publisher.go:186] Finished syncing namespace "azuredisk-9336" (22.761169ms)
I0826 18:23:33.915160       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9336" (21.886674ms)
I0826 18:23:34.012287       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1577
I0826 18:23:34.058501       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1577, name default-token-5cx6h, uid 1ca570a6-b586-44c2-a3ea-68f8e5f394df, event type delete
E0826 18:23:34.076695       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1577/default: secrets "default-token-c96vj" is forbidden: unable to create new content in namespace azuredisk-1577 because it is being terminated
I0826 18:23:34.107471       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1577, name pvc-49dlq.169eee5d9bd736ba, uid 1988389b-3df3-4f52-9088-b0a82a2a1122, event type delete
I0826 18:23:34.155577       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1577/default), service account deleted, removing tokens
I0826 18:23:34.155651       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1577, name default, uid 3674c63f-4ad8-45dd-a748-cac4c567c120, event type delete
I0826 18:23:34.155684       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1577" (1.8µs)
I0826 18:23:34.162554       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1577, name kube-root-ca.crt, uid 5ecd3555-c1b5-4b8f-9404-e92a07569bab, event type delete
I0826 18:23:34.165138       1 publisher.go:186] Finished syncing namespace "azuredisk-1577" (2.547385ms)
... skipping 2 lines ...
I0826 18:23:34.213184       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1577" (203.981422ms)
I0826 18:23:36.171324       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9336" (3.4µs)
I0826 18:23:36.304977       1 publisher.go:186] Finished syncing namespace "azuredisk-552" (11.152716ms)
I0826 18:23:36.308094       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-552" (13.718896ms)
I0826 18:23:36.435772       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-953
I0826 18:23:36.497275       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-953, name default-token-8mm8q, uid 68d3aa93-ea0c-48ef-9ce7-fe7d70fbf0fc, event type delete
E0826 18:23:36.510832       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-953/default: secrets "default-token-rjtnd" is forbidden: unable to create new content in namespace azuredisk-953 because it is being terminated
I0826 18:23:36.557316       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-953, name kube-root-ca.crt, uid 71f56c23-6b70-4ab0-a586-9c110a65e465, event type delete
I0826 18:23:36.560209       1 publisher.go:186] Finished syncing namespace "azuredisk-953" (2.845678ms)
I0826 18:23:36.594232       1 tokens_controller.go:252] syncServiceAccount(azuredisk-953/default), service account deleted, removing tokens
I0826 18:23:36.594335       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-953, name default, uid a4c2b318-b076-429b-946c-c8fbf86e56c0, event type delete
I0826 18:23:36.594389       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-953" (1.8µs)
I0826 18:23:36.629404       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-953, estimate: 0, errors: <nil>
... skipping 599 lines ...
I0826 18:24:20.093787       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:24:20.093811       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:24:20.093824       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:24:20.093910       1 pv_controller.go:1763] operation "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" is already running, skipping
I0826 18:24:20.095704       1 pv_controller.go:1340] isVolumeReleased[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is released
I0826 18:24:20.095736       1 pv_controller.go:1404] doDeleteVolume [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]
I0826 18:24:20.120353       1 pv_controller.go:1259] deletion of volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:20.120383       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:24:20.120396       1 pv_controller.go:858] updating PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:24:20.133965       1 pv_protection_controller.go:205] Got event on PV pvc-25e25903-6ef8-488b-b79a-171f0f80078f
I0826 18:24:20.134294       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:24:20.134417       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:24:20.134609       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:24:20.134714       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:24:20.134977       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:24:20.135233       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:24:20.135359       1 pv_controller.go:1763] operation "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" is already running, skipping
I0826 18:24:20.145600       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:24:20.145757       1 pv_controller.go:879] volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" entered phase "Failed"
I0826 18:24:20.145865       1 pv_controller.go:901] volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
E0826 18:24:20.146008       1 goroutinemap.go:150] Operation for "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" failed. No retries permitted until 2021-08-26 18:24:20.645981769 +0000 UTC m=+1300.504736682 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:20.146769       1 event.go:291] "Event occurred" object="pvc-25e25903-6ef8-488b-b79a-171f0f80078f" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted"
I0826 18:24:24.206333       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:24:24.234342       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:24:24.262770       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:24:24.263012       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" with version 3208
I0826 18:24:24.263053       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22]: phase: Bound, bound to: "azuredisk-552/pvc-mrcqb (uid: e52187e9-9e71-43cf-a975-dbcfdbb0af22)", boundByController: true
... skipping 22 lines ...
I0826 18:24:24.263315       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-552/pvc-mrcqb] status: phase Bound already set
I0826 18:24:24.263319       1 pv_controller.go:858] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: set phase Bound
I0826 18:24:24.263327       1 pv_controller.go:861] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: phase Bound already set
I0826 18:24:24.263327       1 pv_controller.go:1038] volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" bound to claim "azuredisk-552/pvc-mrcqb"
I0826 18:24:24.263339       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:24:24.263344       1 pv_controller.go:1039] volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-mrcqb (uid: e52187e9-9e71-43cf-a975-dbcfdbb0af22)", boundByController: true
I0826 18:24:24.263356       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:24:24.263358       1 pv_controller.go:1040] claim "azuredisk-552/pvc-mrcqb" status after binding: phase: Bound, bound to: "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22", bindCompleted: true, boundByController: true
I0826 18:24:24.263372       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-552/pvc-nl6mg" with version 3216
I0826 18:24:24.263376       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:24:24.263382       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-552/pvc-nl6mg]: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:24:24.263393       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:24:24.263400       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
... skipping 12 lines ...
I0826 18:24:24.264124       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-552/pvc-nl6mg] status: phase Bound already set
I0826 18:24:24.264308       1 pv_controller.go:1038] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" bound to claim "azuredisk-552/pvc-nl6mg"
I0826 18:24:24.264541       1 pv_controller.go:1039] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-nl6mg (uid: 710847eb-f3fa-4c42-bd80-e87e4c22d6f0)", boundByController: true
I0826 18:24:24.264696       1 pv_controller.go:1040] claim "azuredisk-552/pvc-nl6mg" status after binding: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:24:24.276613       1 pv_controller.go:1340] isVolumeReleased[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is released
I0826 18:24:24.276648       1 pv_controller.go:1404] doDeleteVolume [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]
I0826 18:24:24.298351       1 pv_controller.go:1259] deletion of volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:24.298373       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:24:24.298382       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase Failed already set
E0826 18:24:24.298474       1 goroutinemap.go:150] Operation for "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" failed. No retries permitted until 2021-08-26 18:24:25.298399852 +0000 UTC m=+1305.157154765 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:28.146253       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:24:28.146320       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:24:28.146459       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:24:28.146494       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f to the node "capz-z3rmsd-md-0-sq4fr" mounted false
I0826 18:24:28.203957       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:24:28.204002       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22 to the node "capz-z3rmsd-md-0-sq4fr" mounted false
... skipping 64 lines ...
I0826 18:24:39.264307       1 pv_controller.go:1038] volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" bound to claim "azuredisk-552/pvc-mrcqb"
I0826 18:24:39.264312       1 pv_controller.go:858] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: set phase Bound
I0826 18:24:39.264323       1 pv_controller.go:861] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: phase Bound already set
I0826 18:24:39.264327       1 pv_controller.go:1039] volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-mrcqb (uid: e52187e9-9e71-43cf-a975-dbcfdbb0af22)", boundByController: true
I0826 18:24:39.264337       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:24:39.264344       1 pv_controller.go:1040] claim "azuredisk-552/pvc-mrcqb" status after binding: phase: Bound, bound to: "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22", bindCompleted: true, boundByController: true
I0826 18:24:39.264357       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:24:39.264358       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-552/pvc-nl6mg" with version 3216
I0826 18:24:39.264386       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-552/pvc-nl6mg]: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:24:39.264396       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:24:39.264407       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-552/pvc-nl6mg]: volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" found: phase: Bound, bound to: "azuredisk-552/pvc-nl6mg (uid: 710847eb-f3fa-4c42-bd80-e87e4c22d6f0)", boundByController: true
I0826 18:24:39.264416       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-552/pvc-nl6mg]: claim is already correctly bound
I0826 18:24:39.264416       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
... skipping 11 lines ...
I0826 18:24:39.264792       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-552/pvc-nl6mg] status: phase Bound already set
I0826 18:24:39.264866       1 pv_controller.go:1038] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" bound to claim "azuredisk-552/pvc-nl6mg"
I0826 18:24:39.264927       1 pv_controller.go:1039] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-nl6mg (uid: 710847eb-f3fa-4c42-bd80-e87e4c22d6f0)", boundByController: true
I0826 18:24:39.265094       1 pv_controller.go:1040] claim "azuredisk-552/pvc-nl6mg" status after binding: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:24:39.270181       1 pv_controller.go:1340] isVolumeReleased[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is released
I0826 18:24:39.270206       1 pv_controller.go:1404] doDeleteVolume [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]
I0826 18:24:39.291271       1 pv_controller.go:1259] deletion of volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:39.291345       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:24:39.291360       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase Failed already set
E0826 18:24:39.291545       1 goroutinemap.go:150] Operation for "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" failed. No retries permitted until 2021-08-26 18:24:41.29152651 +0000 UTC m=+1321.150281323 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:39.487755       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="54.9µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:52854" resp=200
I0826 18:24:43.709117       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22) returned with <nil>
I0826 18:24:43.709334       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22) succeeded
I0826 18:24:43.709355       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22 was detached from node:capz-z3rmsd-md-0-sq4fr
I0826 18:24:43.709384       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22") on node "capz-z3rmsd-md-0-sq4fr" 
I0826 18:24:43.751739       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0"
... skipping 37 lines ...
I0826 18:24:54.266738       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-552/pvc-nl6mg] status: set phase Bound
I0826 18:24:54.266761       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-552/pvc-nl6mg] status: phase Bound already set
I0826 18:24:54.266779       1 pv_controller.go:1038] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" bound to claim "azuredisk-552/pvc-nl6mg"
I0826 18:24:54.266798       1 pv_controller.go:1039] volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-nl6mg (uid: 710847eb-f3fa-4c42-bd80-e87e4c22d6f0)", boundByController: true
I0826 18:24:54.266817       1 pv_controller.go:1040] claim "azuredisk-552/pvc-nl6mg" status after binding: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:24:54.266848       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:24:54.266890       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:24:54.266916       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:24:54.266937       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:24:54.266948       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:24:54.266965       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:24:54.266994       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22" with version 3208
I0826 18:24:54.267018       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e52187e9-9e71-43cf-a975-dbcfdbb0af22]: phase: Bound, bound to: "azuredisk-552/pvc-mrcqb (uid: e52187e9-9e71-43cf-a975-dbcfdbb0af22)", boundByController: true
... skipping 9 lines ...
I0826 18:24:54.267185       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: all is bound
I0826 18:24:54.267193       1 pv_controller.go:858] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: set phase Bound
I0826 18:24:54.267225       1 pv_controller.go:861] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: phase Bound already set
I0826 18:24:54.267294       1 pv_controller.go:1231] deleteVolumeOperation [pvc-25e25903-6ef8-488b-b79a-171f0f80078f] started
I0826 18:24:54.272623       1 pv_controller.go:1340] isVolumeReleased[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is released
I0826 18:24:54.272652       1 pv_controller.go:1404] doDeleteVolume [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]
I0826 18:24:54.308043       1 pv_controller.go:1259] deletion of volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
I0826 18:24:54.308068       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:24:54.308080       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase Failed already set
E0826 18:24:54.308226       1 goroutinemap.go:150] Operation for "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" failed. No retries permitted until 2021-08-26 18:24:58.308089616 +0000 UTC m=+1338.166844429 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-sq4fr), could not be deleted
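The cycle above — doDeleteVolume fails because the Azure disk is still attached, the volume is marked Failed, and goroutinemap blocks the next attempt behind a growing durationBeforeRetry — is exponential backoff on the delete operation. The Go sketch below reproduces that retry pattern in isolation; the function name, parameters, and simulated error are illustrative only, not the controller's actual code.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// deleteWithBackoff retries op with an exponentially growing delay, mirroring
// the "No retries permitted until ... (durationBeforeRetry 4s/8s/...)" messages
// emitted by the PV controller in the log above. All values are illustrative.
func deleteWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = op(); lastErr == nil {
			return nil
		}
		fmt.Printf("delete failed: %v; next retry in %s\n", lastErr, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	calls := 0
	// Simulated delete that keeps failing while the disk is still attached.
	op := func() error {
		calls++
		if calls < 3 {
			return errors.New("disk already attached to node, could not be deleted")
		}
		return nil
	}
	if err := deleteWithBackoff(op, 500*time.Millisecond, 8*time.Second, 10); err != nil {
		fmt.Println(err)
	}
}
```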
I0826 18:24:59.132137       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0) returned with <nil>
I0826 18:24:59.132317       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0) succeeded
I0826 18:24:59.135068       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0 was detached from node:capz-z3rmsd-md-0-sq4fr
I0826 18:24:59.135131       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0") on node "capz-z3rmsd-md-0-sq4fr" 
I0826 18:24:59.173000       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f"
I0826 18:24:59.173033       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f)
... skipping 46 lines ...
I0826 18:25:09.266381       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: volume is bound to claim azuredisk-552/pvc-nl6mg
I0826 18:25:09.266398       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: claim azuredisk-552/pvc-nl6mg found: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:25:09.266417       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: all is bound
I0826 18:25:09.266425       1 pv_controller.go:858] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: set phase Bound
I0826 18:25:09.266434       1 pv_controller.go:861] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: phase Bound already set
I0826 18:25:09.266448       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:25:09.266470       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:25:09.266493       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:25:09.266513       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:25:09.266523       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:25:09.266538       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:25:09.266585       1 pv_controller.go:1231] deleteVolumeOperation [pvc-25e25903-6ef8-488b-b79a-171f0f80078f] started
I0826 18:25:09.275900       1 pv_controller.go:1340] isVolumeReleased[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is released
I0826 18:25:09.276209       1 pv_controller.go:1404] doDeleteVolume [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]
I0826 18:25:09.276583       1 pv_controller.go:1259] deletion of volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) since it's in attaching or detaching state
I0826 18:25:09.276772       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: set phase Failed
I0826 18:25:09.276979       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase Failed already set
E0826 18:25:09.277298       1 goroutinemap.go:150] Operation for "delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]" failed. No retries permitted until 2021-08-26 18:25:17.277219218 +0000 UTC m=+1357.135974031 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) since it's in attaching or detaching state
I0826 18:25:09.487747       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="90.999µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53138" resp=200
I0826 18:25:12.373417       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 29 items received
I0826 18:25:13.891997       1 gc_controller.go:161] GC'ing orphaned
I0826 18:25:13.892029       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:25:14.761885       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-sq4fr) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) returned with <nil>
I0826 18:25:14.762161       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f) succeeded
... skipping 16 lines ...
I0826 18:25:24.266257       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: volume is bound to claim azuredisk-552/pvc-nl6mg
I0826 18:25:24.266292       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: claim azuredisk-552/pvc-nl6mg found: phase: Bound, bound to: "pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0", bindCompleted: true, boundByController: true
I0826 18:25:24.266306       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: all is bound
I0826 18:25:24.266315       1 pv_controller.go:858] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: set phase Bound
I0826 18:25:24.266327       1 pv_controller.go:861] updating PersistentVolume[pvc-710847eb-f3fa-4c42-bd80-e87e4c22d6f0]: phase Bound already set
I0826 18:25:24.266342       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3303
I0826 18:25:24.266370       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:25:24.266401       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:25:24.266425       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:25:24.266442       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:25:24.266458       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:25:24.266497       1 pv_controller.go:1231] deleteVolumeOperation [pvc-25e25903-6ef8-488b-b79a-171f0f80078f] started
I0826 18:25:24.266788       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-552/pvc-mrcqb" with version 3210
... skipping 34 lines ...
I0826 18:25:29.451358       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-25e25903-6ef8-488b-b79a-171f0f80078f
I0826 18:25:29.451523       1 pv_controller.go:1435] volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" deleted
I0826 18:25:29.451542       1 pv_controller.go:1283] deleteVolumeOperation [pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: success
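Once the attach/detach controller has detached the disk, the next deleteVolumeOperation retry succeeds and the managed disk is removed. A caller that needs to observe this outcome (for example an e2e test) typically polls the API server until the PersistentVolume object disappears. The sketch below does that with the classic wait helper from k8s.io/apimachinery; the kubeconfig path, poll interval, and timeout are illustrative assumptions.

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVDeleted polls until the PersistentVolume object is gone, which in
// the log above only happens after the disk is detached and the managed disk
// is deleted.
func waitForPVDeleted(ctx context.Context, cs kubernetes.Interface, pvName string) error {
	return wait.PollImmediate(5*time.Second, 10*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().PersistentVolumes().Get(ctx, pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // PV is gone; condition met
		}
		if err != nil {
			return false, err // unexpected API error; stop polling
		}
		return false, nil // PV still exists; keep polling
	})
}

func main() {
	// kubeconfig path is a placeholder.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPVDeleted(context.Background(), cs, "pvc-25e25903-6ef8-488b-b79a-171f0f80078f"); err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}
```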
I0826 18:25:29.459284       1 pv_protection_controller.go:205] Got event on PV pvc-25e25903-6ef8-488b-b79a-171f0f80078f
I0826 18:25:29.459328       1 pv_protection_controller.go:125] Processing PV pvc-25e25903-6ef8-488b-b79a-171f0f80078f
I0826 18:25:29.459957       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" with version 3406
I0826 18:25:29.459992       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: phase: Failed, bound to: "azuredisk-552/pvc-wrztz (uid: 25e25903-6ef8-488b-b79a-171f0f80078f)", boundByController: true
I0826 18:25:29.460017       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: volume is bound to claim azuredisk-552/pvc-wrztz
I0826 18:25:29.460032       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: claim azuredisk-552/pvc-wrztz not found
I0826 18:25:29.460038       1 pv_controller.go:1108] reclaimVolume[pvc-25e25903-6ef8-488b-b79a-171f0f80078f]: policy is Delete
I0826 18:25:29.460052       1 pv_controller.go:1752] scheduleOperation[delete-pvc-25e25903-6ef8-488b-b79a-171f0f80078f[1000b8e7-8ccc-4369-8971-008b8381346d]]
I0826 18:25:29.460076       1 pv_controller.go:1231] deleteVolumeOperation [pvc-25e25903-6ef8-488b-b79a-171f0f80078f] started
I0826 18:25:29.464618       1 pv_controller.go:1243] Volume "pvc-25e25903-6ef8-488b-b79a-171f0f80078f" is already being deleted
... skipping 639 lines ...
I0826 18:26:31.379956       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:26:31.379969       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:26:31.379980       1 pv_controller.go:1763] operation "delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]" is already running, skipping
I0826 18:26:31.380021       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9] started
I0826 18:26:31.382091       1 pv_controller.go:1340] isVolumeReleased[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is released
I0826 18:26:31.382110       1 pv_controller.go:1404] doDeleteVolume [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]
I0826 18:26:31.405661       1 pv_controller.go:1259] deletion of volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:26:31.405690       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: set phase Failed
I0826 18:26:31.405704       1 pv_controller.go:858] updating PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: set phase Failed
I0826 18:26:31.410832       1 pv_protection_controller.go:205] Got event on PV pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9
I0826 18:26:31.410871       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3579
I0826 18:26:31.410905       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase: Failed, bound to: "azuredisk-1351/pvc-gptqs (uid: bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9)", boundByController: true
I0826 18:26:31.410935       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is bound to claim azuredisk-1351/pvc-gptqs
I0826 18:26:31.410956       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: claim azuredisk-1351/pvc-gptqs not found
I0826 18:26:31.410964       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:26:31.410979       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:26:31.410987       1 pv_controller.go:1763] operation "delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]" is already running, skipping
I0826 18:26:31.417095       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3579
I0826 18:26:31.417133       1 pv_controller.go:879] volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" entered phase "Failed"
I0826 18:26:31.417194       1 pv_controller.go:901] volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:26:31.417642       1 event.go:291] "Event occurred" object="pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
E0826 18:26:31.417309       1 goroutinemap.go:150] Operation for "delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]" failed. No retries permitted until 2021-08-26 18:26:31.917284365 +0000 UTC m=+1431.776039178 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:26:33.893807       1 gc_controller.go:161] GC'ing orphaned
I0826 18:26:33.893847       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:26:37.791365       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 668 items received
I0826 18:26:39.239399       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:26:39.270397       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:26:39.270524       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3579
... skipping 10 lines ...
I0826 18:26:39.270937       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-1351/pvc-7nqvt]: already bound to "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f"
I0826 18:26:39.270949       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-1351/pvc-7nqvt] status: set phase Bound
I0826 18:26:39.271001       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1351/pvc-7nqvt] status: phase Bound already set
I0826 18:26:39.271031       1 pv_controller.go:1038] volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" bound to claim "azuredisk-1351/pvc-7nqvt"
I0826 18:26:39.271051       1 pv_controller.go:1039] volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" status after binding: phase: Bound, bound to: "azuredisk-1351/pvc-7nqvt (uid: 6d5f17ea-5008-4840-a80f-6e95294a677f)", boundByController: true
I0826 18:26:39.271069       1 pv_controller.go:1040] claim "azuredisk-1351/pvc-7nqvt" status after binding: phase: Bound, bound to: "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f", bindCompleted: true, boundByController: true
I0826 18:26:39.271119       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase: Failed, bound to: "azuredisk-1351/pvc-gptqs (uid: bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9)", boundByController: true
I0826 18:26:39.271284       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is bound to claim azuredisk-1351/pvc-gptqs
I0826 18:26:39.271306       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: claim azuredisk-1351/pvc-gptqs not found
I0826 18:26:39.271467       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:26:39.271512       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:26:39.271689       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9] started
I0826 18:26:39.271581       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" with version 3493
... skipping 5 lines ...
I0826 18:26:39.272195       1 pv_controller.go:861] updating PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: phase Bound already set
I0826 18:26:39.353951       1 pv_controller.go:1340] isVolumeReleased[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is released
I0826 18:26:39.353977       1 pv_controller.go:1404] doDeleteVolume [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]
I0826 18:26:39.368492       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:26:39.368544       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:26:39.368557       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:26:39.377501       1 pv_controller.go:1259] deletion of volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:26:39.377534       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: set phase Failed
I0826 18:26:39.377547       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase Failed already set
E0826 18:26:39.377756       1 goroutinemap.go:150] Operation for "delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]" failed. No retries permitted until 2021-08-26 18:26:40.377558479 +0000 UTC m=+1440.236313292 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:26:39.451233       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
I0826 18:26:39.451278       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:26:39.451291       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f to the node "capz-z3rmsd-md-0-58bbv" mounted false
I0826 18:26:39.451802       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f\"}]}}" for node "capz-z3rmsd-md-0-58bbv" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f 1}]
I0826 18:26:39.452163       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9") on node "capz-z3rmsd-md-0-58bbv" 
I0826 18:26:39.456593       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9") on node "capz-z3rmsd-md-0-58bbv" 
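The node_status_updater patch above rewrites the node's .status.volumesAttached list after a detach. A hedged sketch of reading that same field with client-go, to see which azure-disk volumes the attach/detach controller still considers attached to a node; the kubeconfig path is a placeholder.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// printAttachedVolumes lists node.Status.VolumesAttached, the field the
// node_status_updater is patching in the log above.
func printAttachedVolumes(ctx context.Context, cs kubernetes.Interface, nodeName string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, v := range node.Status.VolumesAttached {
		fmt.Printf("%s (devicePath=%s)\n", v.Name, v.DevicePath)
	}
	return nil
}

func main() {
	// kubeconfig path is a placeholder.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := printAttachedVolumes(context.Background(), cs, "capz-z3rmsd-md-0-58bbv"); err != nil {
		panic(err)
	}
}
```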
... skipping 22 lines ...
I0826 18:26:54.272433       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: volume is bound to claim azuredisk-1351/pvc-7nqvt
I0826 18:26:54.272472       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: claim azuredisk-1351/pvc-7nqvt found: phase: Bound, bound to: "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f", bindCompleted: true, boundByController: true
I0826 18:26:54.272490       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: all is bound
I0826 18:26:54.272527       1 pv_controller.go:858] updating PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: set phase Bound
I0826 18:26:54.272541       1 pv_controller.go:861] updating PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: phase Bound already set
I0826 18:26:54.272557       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3579
I0826 18:26:54.272581       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase: Failed, bound to: "azuredisk-1351/pvc-gptqs (uid: bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9)", boundByController: true
I0826 18:26:54.272605       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is bound to claim azuredisk-1351/pvc-gptqs
I0826 18:26:54.272662       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: claim azuredisk-1351/pvc-gptqs not found
I0826 18:26:54.272731       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:26:54.272821       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:26:54.272875       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9] started
I0826 18:26:54.272188       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-1351/pvc-7nqvt]: volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" found: phase: Bound, bound to: "azuredisk-1351/pvc-7nqvt (uid: 6d5f17ea-5008-4840-a80f-6e95294a677f)", boundByController: true
... skipping 9 lines ...
I0826 18:26:54.273562       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1351/pvc-7nqvt] status: phase Bound already set
I0826 18:26:54.273787       1 pv_controller.go:1038] volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" bound to claim "azuredisk-1351/pvc-7nqvt"
I0826 18:26:54.274650       1 pv_controller.go:1039] volume "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f" status after binding: phase: Bound, bound to: "azuredisk-1351/pvc-7nqvt (uid: 6d5f17ea-5008-4840-a80f-6e95294a677f)", boundByController: true
I0826 18:26:54.274847       1 pv_controller.go:1040] claim "azuredisk-1351/pvc-7nqvt" status after binding: phase: Bound, bound to: "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f", bindCompleted: true, boundByController: true
I0826 18:26:54.288086       1 pv_controller.go:1340] isVolumeReleased[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is released
I0826 18:26:54.288106       1 pv_controller.go:1404] doDeleteVolume [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]
I0826 18:26:54.288272       1 pv_controller.go:1259] deletion of volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) since it's in attaching or detaching state
I0826 18:26:54.288298       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: set phase Failed
I0826 18:26:54.288310       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase Failed already set
E0826 18:26:54.288443       1 goroutinemap.go:150] Operation for "delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]" failed. No retries permitted until 2021-08-26 18:26:56.288416417 +0000 UTC m=+1456.147171330 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) since it's in attaching or detaching state
I0826 18:26:54.978670       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) returned with <nil>
I0826 18:26:54.978863       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9) succeeded
I0826 18:26:54.978880       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 was detached from node:capz-z3rmsd-md-0-58bbv
I0826 18:26:54.978907       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9") on node "capz-z3rmsd-md-0-58bbv" 
I0826 18:26:55.018698       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f"
I0826 18:26:55.018730       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6d5f17ea-5008-4840-a80f-6e95294a677f)
... skipping 22 lines ...
I0826 18:27:09.273985       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: volume is bound to claim azuredisk-1351/pvc-7nqvt
I0826 18:27:09.274137       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: claim azuredisk-1351/pvc-7nqvt found: phase: Bound, bound to: "pvc-6d5f17ea-5008-4840-a80f-6e95294a677f", bindCompleted: true, boundByController: true
I0826 18:27:09.274279       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: all is bound
I0826 18:27:09.274384       1 pv_controller.go:858] updating PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: set phase Bound
I0826 18:27:09.274495       1 pv_controller.go:861] updating PersistentVolume[pvc-6d5f17ea-5008-4840-a80f-6e95294a677f]: phase Bound already set
I0826 18:27:09.274605       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3579
I0826 18:27:09.274654       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase: Failed, bound to: "azuredisk-1351/pvc-gptqs (uid: bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9)", boundByController: true
I0826 18:27:09.274765       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is bound to claim azuredisk-1351/pvc-gptqs
I0826 18:27:09.274878       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: claim azuredisk-1351/pvc-gptqs not found
I0826 18:27:09.274904       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:27:09.275062       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:27:09.275208       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9] started
I0826 18:27:09.282148       1 pv_controller.go:1340] isVolumeReleased[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is released
... skipping 9 lines ...
I0826 18:27:14.487678       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9
I0826 18:27:14.487739       1 pv_controller.go:1435] volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" deleted
I0826 18:27:14.487770       1 pv_controller.go:1283] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: success
I0826 18:27:14.497723       1 pv_protection_controller.go:205] Got event on PV pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9
I0826 18:27:14.497758       1 pv_protection_controller.go:125] Processing PV pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9
I0826 18:27:14.498131       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" with version 3646
I0826 18:27:14.498685       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: phase: Failed, bound to: "azuredisk-1351/pvc-gptqs (uid: bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9)", boundByController: true
I0826 18:27:14.498887       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: volume is bound to claim azuredisk-1351/pvc-gptqs
I0826 18:27:14.499008       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: claim azuredisk-1351/pvc-gptqs not found
I0826 18:27:14.500413       1 pv_controller.go:1108] reclaimVolume[pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9]: policy is Delete
I0826 18:27:14.500449       1 pv_controller.go:1752] scheduleOperation[delete-pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9[8b486d2b-e024-49ca-b307-1bcf3c46d04f]]
I0826 18:27:14.500572       1 pv_controller.go:1231] deleteVolumeOperation [pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9] started
I0826 18:27:14.509871       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9
I0826 18:27:14.509892       1 pv_protection_controller.go:128] Finished processing PV pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9 (12.124751ms)
I0826 18:27:14.510399       1 pv_controller_base.go:235] volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" deleted
I0826 18:27:14.510615       1 pv_controller_base.go:505] deletion of claim "azuredisk-1351/pvc-gptqs" was already processed
I0826 18:27:14.512212       1 pv_controller.go:1238] error reading persistent volume "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9": persistentvolumes "pvc-bb0c3812-f6bc-4ef0-bd1b-44fd019a08f9" not found
I0826 18:27:18.124115       1 pvc_protection_controller.go:353] "Got event on PVC" azuredisk-1351/pvc-7nqvt="(MISSING)"
I0826 18:27:18.124165       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-1351/pvc-7nqvt"
I0826 18:27:18.124188       1 pvc_protection_controller.go:241] "Looking for Pods using PVC in the Informer's cache" PVC="azuredisk-1351/pvc-7nqvt"
I0826 18:27:18.124205       1 pvc_protection_controller.go:273] "No Pod using PVC was found in the Informer's cache" PVC="azuredisk-1351/pvc-7nqvt"
I0826 18:27:18.124219       1 pvc_protection_controller.go:278] "Looking for Pods using PVC with a live list" PVC="azuredisk-1351/pvc-7nqvt"
I0826 18:27:18.124482       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1351/pvc-7nqvt" with version 3653
... skipping 88 lines ...
I0826 18:27:34.165935       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1351, name azuredisk-volume-tester-94ctq.169eee89a6365c5e, uid 283c07e1-e22c-4644-b1c5-a998ed32653c, event type delete
I0826 18:27:34.172311       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1351, name pvc-7nqvt.169eee80d013870d, uid c08ea071-0201-4d98-96df-63525e0b8b00, event type delete
I0826 18:27:34.176198       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1351, name pvc-7nqvt.169eee8171ed854b, uid 768fc1df-966b-4c87-ade2-05de40f41dc8, event type delete
I0826 18:27:34.180961       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1351, name pvc-gptqs.169eee80ddf7fd26, uid facc2e7f-9a5b-4572-b5f4-0eab2a2e9fc9, event type delete
I0826 18:27:34.189731       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1351, name pvc-gptqs.169eee81765fe66e, uid 14f78b8c-a71a-4a2d-8c03-9d93e8928dfb, event type delete
I0826 18:27:34.232359       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1351, name default-token-p7c5d, uid 47523bc9-49de-42d8-9863-60999efa9257, event type delete
E0826 18:27:34.248142       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1351/default: secrets "default-token-cz29g" is forbidden: unable to create new content in namespace azuredisk-1351 because it is being terminated
I0826 18:27:34.285405       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1351, name kube-root-ca.crt, uid b41a0c8e-8aa8-4b8e-a439-ca1d229c5ba2, event type delete
I0826 18:27:34.288703       1 publisher.go:186] Finished syncing namespace "azuredisk-1351" (3.250859ms)
I0826 18:27:34.304847       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1351/default), service account deleted, removing tokens
I0826 18:27:34.305032       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1351, name default, uid dbf67501-b519-41f8-9f4b-ad11eb7afb29, event type delete
I0826 18:27:34.305155       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1351" (2.3µs)
I0826 18:27:34.316537       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1351, estimate: 0, errors: <nil>
... skipping 12 lines ...
I0826 18:27:36.284323       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-9267/pvc-mrnvx]: no volume found
I0826 18:27:36.284374       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-9267/pvc-mrnvx] status: set phase Pending
I0826 18:27:36.284392       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-9267/pvc-mrnvx] status: phase Pending already set
I0826 18:27:36.284871       1 event.go:291] "Event occurred" object="azuredisk-9267/pvc-mrnvx" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I0826 18:27:36.447166       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3410
I0826 18:27:36.482446       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3410, name default-token-zrc79, uid cbd7b414-1691-427b-a85a-09a1825825f0, event type delete
E0826 18:27:36.495350       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: secrets "default-token-gwnvp" is forbidden: unable to create new content in namespace azuredisk-3410 because it is being terminated
I0826 18:27:36.499744       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3410/default), service account deleted, removing tokens
I0826 18:27:36.499865       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3410, name default, uid 2cadf819-b696-4c22-834b-18dee7bff710, event type delete
I0826 18:27:36.500555       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3410" (2.8µs)
I0826 18:27:36.516118       1 pvc_protection_controller.go:353] "Got event on PVC" azuredisk-9267/pvc-zm2hj="(MISSING)"
I0826 18:27:36.516167       1 pv_controller_base.go:612] storeObjectUpdate: adding claim "azuredisk-9267/pvc-zm2hj", version 3732
I0826 18:27:36.516185       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-9267/pvc-zm2hj]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 69 lines ...
I0826 18:27:36.669807       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9267/pvc-zm2hj" with version 3743
I0826 18:27:36.675146       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 StorageAccountType:StandardSSD_LRS Size:10
I0826 18:27:38.811467       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8553
I0826 18:27:38.901713       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8553, name kube-root-ca.crt, uid 31971c51-9025-403b-869b-590dad706c3f, event type delete
I0826 18:27:38.908488       1 publisher.go:186] Finished syncing namespace "azuredisk-8553" (6.718116ms)
I0826 18:27:38.927844       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8553, name default-token-72qjm, uid 3fa1f80e-4bd6-4d9c-a477-ab1169d635f0, event type delete
E0826 18:27:38.960333       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8553/default: secrets "default-token-6jb2v" is forbidden: unable to create new content in namespace azuredisk-8553 because it is being terminated
I0826 18:27:39.012702       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a StorageAccountType:Standard_LRS Size:10
I0826 18:27:39.036359       1 azure_managedDiskController.go:380] Azure disk "capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" is not zoned
I0826 18:27:39.053391       1 pv_controller.go:1598] volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" for claim "azuredisk-9267/pvc-x7xd6" created
I0826 18:27:39.053487       1 pv_controller.go:1615] provisionClaimOperation [azuredisk-9267/pvc-x7xd6]: trying to save volume pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
I0826 18:27:39.064057       1 pv_controller.go:1623] volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" for claim "azuredisk-9267/pvc-x7xd6" saved
I0826 18:27:39.064106       1 pv_controller_base.go:612] storeObjectUpdate: adding volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a", version 3749
... skipping 709 lines ...
I0826 18:28:49.487461       1 pv_controller.go:1108] reclaimVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: policy is Delete
I0826 18:28:49.487492       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]]
I0826 18:28:49.487549       1 pv_controller.go:1763] operation "delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]" is already running, skipping
I0826 18:28:49.489840       1 pv_controller.go:1340] isVolumeReleased[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is released
I0826 18:28:49.489859       1 pv_controller.go:1404] doDeleteVolume [pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]
I0826 18:28:49.490862       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="103.599µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55294" resp=200
I0826 18:28:49.540604       1 pv_controller.go:1259] deletion of volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:28:49.540639       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: set phase Failed
I0826 18:28:49.540942       1 pv_controller.go:858] updating PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: set phase Failed
I0826 18:28:49.545640       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" with version 3895
I0826 18:28:49.545690       1 pv_controller.go:879] volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" entered phase "Failed"
I0826 18:28:49.545701       1 pv_controller.go:901] volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
E0826 18:28:49.545783       1 goroutinemap.go:150] Operation for "delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]" failed. No retries permitted until 2021-08-26 18:28:50.045724488 +0000 UTC m=+1569.904479301 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted
I0826 18:28:49.546163       1 event.go:291] "Event occurred" object="pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/virtualMachines/capz-z3rmsd-md-0-58bbv), could not be deleted"
I0826 18:28:49.546183       1 pv_protection_controller.go:205] Got event on PV pvc-c57b61f8-77d8-4de8-91ee-1c5842909296
I0826 18:28:49.546209       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" with version 3895
I0826 18:28:49.546236       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: phase: Failed, bound to: "azuredisk-9267/pvc-zm2hj (uid: c57b61f8-77d8-4de8-91ee-1c5842909296)", boundByController: true
I0826 18:28:49.546533       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is bound to claim azuredisk-9267/pvc-zm2hj
I0826 18:28:49.546673       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: claim azuredisk-9267/pvc-zm2hj not found
I0826 18:28:49.546796       1 pv_controller.go:1108] reclaimVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: policy is Delete
I0826 18:28:49.546923       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]]
I0826 18:28:49.547020       1 pv_controller.go:1765] operation "delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]" postponed due to exponential backoff
I0826 18:28:50.482905       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-58bbv"
... skipping 69 lines ...
I0826 18:28:54.279781       1 pv_controller.go:1040] claim "azuredisk-9267/pvc-mrnvx" status after binding: phase: Bound, bound to: "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6", bindCompleted: true, boundByController: true
I0826 18:28:54.279862       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 found: phase: Bound, bound to: "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a", bindCompleted: true, boundByController: true
I0826 18:28:54.279929       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: all is bound
I0826 18:28:54.279946       1 pv_controller.go:858] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: set phase Bound
I0826 18:28:54.279957       1 pv_controller.go:861] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase Bound already set
I0826 18:28:54.279974       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" with version 3895
I0826 18:28:54.280067       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: phase: Failed, bound to: "azuredisk-9267/pvc-zm2hj (uid: c57b61f8-77d8-4de8-91ee-1c5842909296)", boundByController: true
I0826 18:28:54.280097       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is bound to claim azuredisk-9267/pvc-zm2hj
I0826 18:28:54.280141       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: claim azuredisk-9267/pvc-zm2hj not found
I0826 18:28:54.280164       1 pv_controller.go:1108] reclaimVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: policy is Delete
I0826 18:28:54.280269       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]]
I0826 18:28:54.280370       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3762
I0826 18:28:54.280480       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase: Bound, bound to: "azuredisk-9267/pvc-mrnvx (uid: 75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6)", boundByController: true
... skipping 2 lines ...
I0826 18:28:54.280401       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c57b61f8-77d8-4de8-91ee-1c5842909296] started
I0826 18:28:54.280667       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: all is bound
I0826 18:28:54.280736       1 pv_controller.go:858] updating PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: set phase Bound
I0826 18:28:54.280806       1 pv_controller.go:861] updating PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase Bound already set
I0826 18:28:54.291589       1 pv_controller.go:1340] isVolumeReleased[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is released
I0826 18:28:54.291610       1 pv_controller.go:1404] doDeleteVolume [pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]
I0826 18:28:54.291650       1 pv_controller.go:1259] deletion of volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) since it's in attaching or detaching state
I0826 18:28:54.291668       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: set phase Failed
I0826 18:28:54.291679       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: phase Failed already set
E0826 18:28:54.291713       1 goroutinemap.go:150] Operation for "delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]" failed. No retries permitted until 2021-08-26 18:28:55.291689051 +0000 UTC m=+1575.150443864 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) since it's in attaching or detaching state
I0826 18:28:54.564633       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-58bbv ReadyCondition updated. Updating timestamp.
I0826 18:28:59.488632       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="71.499µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55392" resp=200
I0826 18:28:59.722593       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0826 18:29:01.018933       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) returned with <nil>
I0826 18:29:01.019377       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296) succeeded
I0826 18:29:01.019398       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296 was detached from node:capz-z3rmsd-md-0-58bbv
... skipping 7 lines ...
I0826 18:29:09.278192       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is bound to claim azuredisk-9267/pvc-x7xd6
I0826 18:29:09.278212       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 found: phase: Bound, bound to: "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a", bindCompleted: true, boundByController: true
I0826 18:29:09.278287       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: all is bound
I0826 18:29:09.278300       1 pv_controller.go:858] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: set phase Bound
I0826 18:29:09.278311       1 pv_controller.go:861] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase Bound already set
I0826 18:29:09.278331       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" with version 3895
I0826 18:29:09.278355       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: phase: Failed, bound to: "azuredisk-9267/pvc-zm2hj (uid: c57b61f8-77d8-4de8-91ee-1c5842909296)", boundByController: true
I0826 18:29:09.278381       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is bound to claim azuredisk-9267/pvc-zm2hj
I0826 18:29:09.278401       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: claim azuredisk-9267/pvc-zm2hj not found
I0826 18:29:09.278409       1 pv_controller.go:1108] reclaimVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: policy is Delete
I0826 18:29:09.278426       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]]
I0826 18:29:09.278444       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3762
I0826 18:29:09.278470       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase: Bound, bound to: "azuredisk-9267/pvc-mrnvx (uid: 75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6)", boundByController: true
... skipping 45 lines ...
I0826 18:29:14.462012       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296
I0826 18:29:14.462047       1 pv_controller.go:1435] volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" deleted
I0826 18:29:14.462063       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: success
I0826 18:29:14.470229       1 pv_protection_controller.go:205] Got event on PV pvc-c57b61f8-77d8-4de8-91ee-1c5842909296
I0826 18:29:14.470259       1 pv_protection_controller.go:125] Processing PV pvc-c57b61f8-77d8-4de8-91ee-1c5842909296
I0826 18:29:14.470280       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" with version 3934
I0826 18:29:14.470317       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: phase: Failed, bound to: "azuredisk-9267/pvc-zm2hj (uid: c57b61f8-77d8-4de8-91ee-1c5842909296)", boundByController: true
I0826 18:29:14.470368       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: volume is bound to claim azuredisk-9267/pvc-zm2hj
I0826 18:29:14.470389       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: claim azuredisk-9267/pvc-zm2hj not found
I0826 18:29:14.470397       1 pv_controller.go:1108] reclaimVolume[pvc-c57b61f8-77d8-4de8-91ee-1c5842909296]: policy is Delete
I0826 18:29:14.470480       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c57b61f8-77d8-4de8-91ee-1c5842909296[d2c8dea9-c32b-4d70-8855-71ed30c7c847]]
I0826 18:29:14.470508       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c57b61f8-77d8-4de8-91ee-1c5842909296] started
I0826 18:29:14.474598       1 pv_controller.go:1243] Volume "pvc-c57b61f8-77d8-4de8-91ee-1c5842909296" is already being deleted
... skipping 45 lines ...
I0826 18:29:15.742342       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: claim azuredisk-9267/pvc-mrnvx not found
I0826 18:29:15.742346       1 pv_controller.go:1108] reclaimVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: policy is Delete
I0826 18:29:15.742353       1 pv_controller.go:1752] scheduleOperation[delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]]
I0826 18:29:15.742358       1 pv_controller.go:1763] operation "delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]" is already running, skipping
I0826 18:29:15.745463       1 pv_controller.go:1340] isVolumeReleased[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: volume is released
I0826 18:29:15.745481       1 pv_controller.go:1404] doDeleteVolume [pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]
I0826 18:29:15.745514       1 pv_controller.go:1259] deletion of volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) since it's in attaching or detaching state
I0826 18:29:15.745528       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: set phase Failed
I0826 18:29:15.745538       1 pv_controller.go:858] updating PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: set phase Failed
I0826 18:29:15.748208       1 pv_protection_controller.go:205] Got event on PV pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6
I0826 18:29:15.748234       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3944
I0826 18:29:15.748274       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase: Failed, bound to: "azuredisk-9267/pvc-mrnvx (uid: 75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6)", boundByController: true
I0826 18:29:15.748317       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: volume is bound to claim azuredisk-9267/pvc-mrnvx
I0826 18:29:15.748355       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: claim azuredisk-9267/pvc-mrnvx not found
I0826 18:29:15.748363       1 pv_controller.go:1108] reclaimVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: policy is Delete
I0826 18:29:15.748373       1 pv_controller.go:1752] scheduleOperation[delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]]
I0826 18:29:15.748380       1 pv_controller.go:1763] operation "delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]" is already running, skipping
I0826 18:29:15.749057       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3944
I0826 18:29:15.749086       1 pv_controller.go:879] volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" entered phase "Failed"
I0826 18:29:15.749172       1 pv_controller.go:901] volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) since it's in attaching or detaching state
E0826 18:29:15.749383       1 goroutinemap.go:150] Operation for "delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]" failed. No retries permitted until 2021-08-26 18:29:16.249343772 +0000 UTC m=+1596.108098685 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) since it's in attaching or detaching state
I0826 18:29:15.749530       1 event.go:291] "Event occurred" object="pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) since it's in attaching or detaching state"
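The sequence above shows the PV controller trying to delete the managed disk while it is still detaching: doDeleteVolume fails, the PV is moved to phase "Failed", a VolumeFailedDelete warning event is recorded, and the goroutine map blocks retries for an exponentially growing interval (500ms here, 1s on a later failure further down in this log). Below is a minimal Go sketch of that retry-with-backoff pattern, using only the standard library; simulateDiskDelete and errDiskBusy are hypothetical stand-ins for the real cloud-provider call and error, not kube-controller-manager code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errDiskBusy stands in for the real "disk is in attaching or detaching state" error.
    var errDiskBusy = errors.New("disk is in attaching or detaching state")

    // simulateDiskDelete is a hypothetical stand-in for the cloud call that fails
    // while the disk is still detaching and succeeds once the detach has completed.
    func simulateDiskDelete(attempt int) error {
        if attempt < 3 {
            return errDiskBusy
        }
        return nil
    }

    func main() {
        backoff := 500 * time.Millisecond // initial durationBeforeRetry seen in the log
        for attempt := 1; ; attempt++ {
            if err := simulateDiskDelete(attempt); err == nil {
                fmt.Println("delete succeeded on attempt", attempt)
                return
            } else {
                // Mirror of the logged behaviour: record the failure, wait, double the backoff.
                fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, backoff)
                time.Sleep(backoff)
                backoff *= 2
            }
        }
    }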
I0826 18:29:16.506863       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) returned with <nil>
I0826 18:29:16.506916       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6) succeeded
I0826 18:29:16.506948       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6 was detached from node:capz-z3rmsd-md-0-58bbv
I0826 18:29:16.506971       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6") on node "capz-z3rmsd-md-0-58bbv" 
I0826 18:29:16.546346       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a"
I0826 18:29:16.546384       1 azure_controller_standard.go:166] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a)
... skipping 7 lines ...
I0826 18:29:24.278787       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is bound to claim azuredisk-9267/pvc-x7xd6
I0826 18:29:24.278813       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 found: phase: Bound, bound to: "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a", bindCompleted: true, boundByController: true
I0826 18:29:24.278823       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: all is bound
I0826 18:29:24.278828       1 pv_controller.go:858] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: set phase Bound
I0826 18:29:24.278834       1 pv_controller.go:861] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase Bound already set
I0826 18:29:24.278844       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3944
I0826 18:29:24.278855       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase: Failed, bound to: "azuredisk-9267/pvc-mrnvx (uid: 75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6)", boundByController: true
I0826 18:29:24.278869       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: volume is bound to claim azuredisk-9267/pvc-mrnvx
I0826 18:29:24.278880       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: claim azuredisk-9267/pvc-mrnvx not found
I0826 18:29:24.278884       1 pv_controller.go:1108] reclaimVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: policy is Delete
I0826 18:29:24.278894       1 pv_controller.go:1752] scheduleOperation[delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]]
I0826 18:29:24.278913       1 pv_controller.go:1231] deleteVolumeOperation [pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6] started
I0826 18:29:24.279102       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-9267/pvc-x7xd6" with version 3753
... skipping 18 lines ...
I0826 18:29:29.539464       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6
I0826 18:29:29.539519       1 pv_controller.go:1435] volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" deleted
I0826 18:29:29.539753       1 pv_controller.go:1283] deleteVolumeOperation [pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: success
I0826 18:29:29.564912       1 pv_protection_controller.go:205] Got event on PV pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6
I0826 18:29:29.565237       1 pv_protection_controller.go:125] Processing PV pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6
I0826 18:29:29.564980       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" with version 3965
I0826 18:29:29.565429       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: phase: Failed, bound to: "azuredisk-9267/pvc-mrnvx (uid: 75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6)", boundByController: true
I0826 18:29:29.565494       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: volume is bound to claim azuredisk-9267/pvc-mrnvx
I0826 18:29:29.565516       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: claim azuredisk-9267/pvc-mrnvx not found
I0826 18:29:29.565551       1 pv_controller.go:1108] reclaimVolume[pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6]: policy is Delete
I0826 18:29:29.565603       1 pv_controller.go:1752] scheduleOperation[delete-pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6[6f6c9e04-c006-41ec-b6ce-1fcd1efd3b4f]]
I0826 18:29:29.565800       1 pv_controller.go:1231] deleteVolumeOperation [pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6] started
I0826 18:29:29.578907       1 pv_controller.go:1243] Volume "pvc-75167ef7-aaa2-4f2c-b216-fcd3e9ca3bc6" is already being deleted
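The repeated "operation ... is already running, skipping" and "Volume ... is already being deleted" lines show that each delete is keyed by an operation name and a second goroutine is never started for the same volume while the first is in flight. Below is a minimal sketch of that idea with a mutex-guarded set; operationMap and Run are illustrative names under that assumption, not the actual goroutinemap API.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // operationMap is an illustrative stand-in for the controller's goroutine map:
    // at most one operation per name may run at a time.
    type operationMap struct {
        mu      sync.Mutex
        running map[string]bool
    }

    func newOperationMap() *operationMap {
        return &operationMap{running: map[string]bool{}}
    }

    // Run starts fn in a goroutine unless an operation with the same name is
    // already in flight, in which case it is skipped.
    func (m *operationMap) Run(name string, fn func()) {
        m.mu.Lock()
        if m.running[name] {
            m.mu.Unlock()
            fmt.Printf("operation %q is already running, skipping\n", name)
            return
        }
        m.running[name] = true
        m.mu.Unlock()

        go func() {
            defer func() {
                m.mu.Lock()
                delete(m.running, name)
                m.mu.Unlock()
            }()
            fn()
        }()
    }

    func main() {
        ops := newOperationMap()
        work := func() { time.Sleep(100 * time.Millisecond) }
        // The second call is skipped because the first delete is still running.
        ops.Run("delete-pvc-example", work)
        ops.Run("delete-pvc-example", work)
        time.Sleep(200 * time.Millisecond)
    }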
... skipping 47 lines ...
I0826 18:29:31.800369       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 not found
I0826 18:29:31.800378       1 pv_controller.go:1108] reclaimVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: policy is Delete
I0826 18:29:31.800400       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]]
I0826 18:29:31.800407       1 pv_controller.go:1763] operation "delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]" is already running, skipping
I0826 18:29:31.801783       1 pv_controller.go:1340] isVolumeReleased[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is released
I0826 18:29:31.801802       1 pv_controller.go:1404] doDeleteVolume [pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]
I0826 18:29:31.801835       1 pv_controller.go:1259] deletion of volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) since it's in attaching or detaching state
I0826 18:29:31.801870       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: set phase Failed
I0826 18:29:31.801880       1 pv_controller.go:858] updating PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: set phase Failed
I0826 18:29:31.804788       1 pv_protection_controller.go:205] Got event on PV pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
I0826 18:29:31.804820       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" with version 3974
I0826 18:29:31.805097       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase: Failed, bound to: "azuredisk-9267/pvc-x7xd6 (uid: 4f0e453e-33b9-4eaa-b33e-f927320e760a)", boundByController: true
I0826 18:29:31.805259       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is bound to claim azuredisk-9267/pvc-x7xd6
I0826 18:29:31.805408       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 not found
I0826 18:29:31.805572       1 pv_controller.go:1108] reclaimVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: policy is Delete
I0826 18:29:31.805782       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]]
I0826 18:29:31.805896       1 pv_controller.go:1763] operation "delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]" is already running, skipping
I0826 18:29:31.806165       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" with version 3974
I0826 18:29:31.806188       1 pv_controller.go:879] volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" entered phase "Failed"
I0826 18:29:31.806421       1 pv_controller.go:901] volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) since it's in attaching or detaching state
E0826 18:29:31.806480       1 goroutinemap.go:150] Operation for "delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]" failed. No retries permitted until 2021-08-26 18:29:32.306456025 +0000 UTC m=+1612.165210838 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) since it's in attaching or detaching state
I0826 18:29:31.806832       1 event.go:291] "Event occurred" object="pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) since it's in attaching or detaching state"
I0826 18:29:32.005842       1 azure_controller_standard.go:184] azureDisk - update(capz-z3rmsd): vm(capz-z3rmsd-md-0-58bbv) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) returned with <nil>
I0826 18:29:32.005884       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a) succeeded
I0826 18:29:32.005894       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a was detached from node:capz-z3rmsd-md-0-58bbv
I0826 18:29:32.005920       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a") on node "capz-z3rmsd-md-0-58bbv" 
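The DetachVolume.Detach lines identify the volume by a unique name of the form "<plugin>/<volume-id>", here the in-tree kubernetes.io/azure-disk plugin followed by the full Azure disk URI. The short sketch below shows how such a name could be split for inspection; splitUniqueName is a hypothetical helper written for this log, not the controller's actual parser.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitUniqueName splits a unique volume name of the form "<plugin>/<volume-id>".
    // The in-tree plugin name itself contains a slash ("kubernetes.io/azure-disk"),
    // so the split happens after the second path segment.
    func splitUniqueName(unique string) (plugin, volumeID string, ok bool) {
        parts := strings.SplitN(unique, "/", 3)
        if len(parts) != 3 {
            return "", "", false
        }
        return parts[0] + "/" + parts[1], parts[2], true
    }

    func main() {
        unique := "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a"
        plugin, id, ok := splitUniqueName(unique)
        fmt.Println(ok, plugin, id)
    }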
I0826 18:29:33.774676       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 11 items received
I0826 18:29:33.791206       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
... skipping 5 lines ...
I0826 18:29:33.900440       1 gc_controller.go:161] GC'ing orphaned
I0826 18:29:33.900464       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:29:34.570360       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-sq4fr ReadyCondition updated. Updating timestamp.
I0826 18:29:39.246242       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:29:39.279058       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:29:39.279195       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" with version 3974
I0826 18:29:39.279299       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase: Failed, bound to: "azuredisk-9267/pvc-x7xd6 (uid: 4f0e453e-33b9-4eaa-b33e-f927320e760a)", boundByController: true
I0826 18:29:39.279362       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is bound to claim azuredisk-9267/pvc-x7xd6
I0826 18:29:39.279387       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 not found
I0826 18:29:39.279408       1 pv_controller.go:1108] reclaimVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: policy is Delete
I0826 18:29:39.279425       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]]
I0826 18:29:39.279494       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a] started
I0826 18:29:39.285296       1 pv_controller.go:1340] isVolumeReleased[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is released
... skipping 2 lines ...
I0826 18:29:44.494239       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
I0826 18:29:44.494565       1 pv_controller.go:1435] volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" deleted
I0826 18:29:44.494594       1 pv_controller.go:1283] deleteVolumeOperation [pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: success
I0826 18:29:44.508916       1 pv_protection_controller.go:205] Got event on PV pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
I0826 18:29:44.508962       1 pv_protection_controller.go:125] Processing PV pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
I0826 18:29:44.509233       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a" with version 3993
I0826 18:29:44.509321       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: phase: Failed, bound to: "azuredisk-9267/pvc-x7xd6 (uid: 4f0e453e-33b9-4eaa-b33e-f927320e760a)", boundByController: true
I0826 18:29:44.509536       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: volume is bound to claim azuredisk-9267/pvc-x7xd6
I0826 18:29:44.509580       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: claim azuredisk-9267/pvc-x7xd6 not found
I0826 18:29:44.509604       1 pv_controller.go:1108] reclaimVolume[pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a]: policy is Delete
I0826 18:29:44.509636       1 pv_controller.go:1752] scheduleOperation[delete-pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a[8c10b5d8-2c39-49c1-93c4-657bac129c08]]
I0826 18:29:44.509678       1 pv_controller.go:1231] deleteVolumeOperation [pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a] started
I0826 18:29:44.516268       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-4f0e453e-33b9-4eaa-b33e-f927320e760a
... skipping 10 lines ...
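Once the detach completes and doDeleteVolume succeeds, the managed disk is deleted and the pv-protection controller removes the kubernetes.io/pv-protection finalizer so the API server can finally drop the PV object. Below is a minimal sketch of stripping one finalizer from a list, assuming a plain string slice in place of the real API object; "example.com/other-finalizer" is a made-up placeholder.

    package main

    import "fmt"

    // pvProtectionFinalizer is the finalizer the pv-protection controller manages.
    const pvProtectionFinalizer = "kubernetes.io/pv-protection"

    // removeFinalizer returns the finalizer list with one entry removed; a
    // simplified stand-in for the controller patching the PersistentVolume.
    func removeFinalizer(finalizers []string, name string) []string {
        out := finalizers[:0]
        for _, f := range finalizers {
            if f != name {
                out = append(out, f)
            }
        }
        return out
    }

    func main() {
        finalizers := []string{pvProtectionFinalizer, "example.com/other-finalizer"}
        finalizers = removeFinalizer(finalizers, pvProtectionFinalizer)
        fmt.Println(finalizers) // [example.com/other-finalizer]
    }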
I0826 18:29:50.420970       1 publisher.go:186] Finished syncing namespace "azuredisk-493" (29.231258ms)
I0826 18:29:52.615347       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-493" (14.1µs)
I0826 18:29:52.751911       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7175" (14.767513ms)
I0826 18:29:52.752184       1 publisher.go:186] Finished syncing namespace "azuredisk-7175" (15.06801ms)
I0826 18:29:52.826948       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-9267
I0826 18:29:52.896959       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9267, name default-token-2kp25, uid b7657f5a-57d5-435a-b660-e1a8139e7e78, event type delete
E0826 18:29:52.913839       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9267/default: secrets "default-token-hxvgf" is forbidden: unable to create new content in namespace azuredisk-9267 because it is being terminated
I0826 18:29:52.924722       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eee99d7f2cd32, uid 23119121-290b-481b-b58b-ac56090ee3d5, event type delete
I0826 18:29:52.931364       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eee9c5c8f2769, uid e7e23703-3005-4bd2-9b8e-efe990efd335, event type delete
I0826 18:29:52.937259       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eee9ecf11bba9, uid f263726d-3593-44dc-bd67-b31c2c202ae0, event type delete
I0826 18:29:52.941477       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eeea1406d714a, uid 26f31513-c4d2-4be0-9774-62cd8ecec455, event type delete
I0826 18:29:52.945512       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eeea973cb238e, uid 0be71935-f64c-401a-a53e-61714a78f26f, event type delete
I0826 18:29:52.949700       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9267, name azuredisk-volume-tester-snnkl.169eeea9781d6dac, uid 9aedfda3-dfed-48b7-bb0d-91a49aeae1d3, event type delete
... skipping 48 lines ...
I0826 18:29:54.984224       1 pv_controller.go:1445] provisionClaim[azuredisk-7175/pvc-fhw55]: started
I0826 18:29:54.984233       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-7175/pvc-fhw55[6f03fdae-6b87-4223-aec8-6a22360169f9]]
I0826 18:29:54.984240       1 pv_controller.go:1763] operation "provision-azuredisk-7175/pvc-fhw55[6f03fdae-6b87-4223-aec8-6a22360169f9]" is already running, skipping
I0826 18:29:54.987915       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9 StorageAccountType:Standard_LRS Size:10
I0826 18:29:55.277115       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5802
I0826 18:29:55.311625       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5802, name default-token-v8n8k, uid accc6e2d-5b84-489b-87c7-f072bc4a9438, event type delete
E0826 18:29:55.326372       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5802/default: secrets "default-token-6s7px" is forbidden: unable to create new content in namespace azuredisk-5802 because it is being terminated
I0826 18:29:55.358518       1 tokens_controller.go:252] syncServiceAccount(azuredisk-5802/default), service account deleted, removing tokens
I0826 18:29:55.358579       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-5802, name default, uid a7eeddd1-0e45-454a-93f0-06ab62b3d5d4, event type delete
I0826 18:29:55.358618       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5802" (1.9µs)
I0826 18:29:55.428601       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-5802, name kube-root-ca.crt, uid 9bf2a84f-c690-4fc6-aff3-616d13ae4134, event type delete
I0826 18:29:55.432333       1 publisher.go:186] Finished syncing namespace "azuredisk-5802" (3.682754ms)
I0826 18:29:55.479059       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5802" (2.6µs)
... skipping 56 lines ...
I0826 18:29:57.347822       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-7175/pvc-fhw55] status: phase Bound already set
I0826 18:29:57.348038       1 pv_controller.go:1038] volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" bound to claim "azuredisk-7175/pvc-fhw55"
I0826 18:29:57.348190       1 pv_controller.go:1039] volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" status after binding: phase: Bound, bound to: "azuredisk-7175/pvc-fhw55 (uid: 6f03fdae-6b87-4223-aec8-6a22360169f9)", boundByController: true
I0826 18:29:57.348336       1 pv_controller.go:1040] claim "azuredisk-7175/pvc-fhw55" status after binding: phase: Bound, bound to: "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9", bindCompleted: true, boundByController: true
I0826 18:29:57.618519       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-493
I0826 18:29:57.681159       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-493, name default-token-nbjhs, uid 9529187f-69d3-4650-a50e-f168e3d328e1, event type delete
E0826 18:29:57.705980       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-493/default: secrets "default-token-zx6ml" is forbidden: unable to create new content in namespace azuredisk-493 because it is being terminated
I0826 18:29:57.787420       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-493, name kube-root-ca.crt, uid c33005df-dd93-4062-a3e4-2550faaf759a, event type delete
I0826 18:29:57.791495       1 publisher.go:186] Finished syncing namespace "azuredisk-493" (4.006649ms)
I0826 18:29:57.803380       1 tokens_controller.go:252] syncServiceAccount(azuredisk-493/default), service account deleted, removing tokens
I0826 18:29:57.803627       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-493, name default, uid 208ede75-2cda-448c-8c80-a6a9623e0487, event type delete
I0826 18:29:57.803659       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-493" (2.6µs)
I0826 18:29:57.815550       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-493, estimate: 0, errors: <nil>
... skipping 1076 lines ...
I0826 18:35:39.297620       1 pv_controller.go:1039] volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" status after binding: phase: Bound, bound to: "azuredisk-7175/pvc-fhw55 (uid: 6f03fdae-6b87-4223-aec8-6a22360169f9)", boundByController: true
I0826 18:35:39.297635       1 pv_controller.go:1040] claim "azuredisk-7175/pvc-fhw55" status after binding: phase: Bound, bound to: "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9", bindCompleted: true, boundByController: true
I0826 18:35:39.487988       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="76.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59278" resp=200
I0826 18:35:41.346758       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:41.501521       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:41.501551       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (165.657433ms)
E0826 18:35:41.501574       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
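From here on, deleteAllContent keeps returning an estimate and the error "unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods" because the tester pod has not finished terminating; the namespace controller simply requeues the namespace and retries, which is why the same three lines repeat for the next several minutes. A small Go sketch of that poll-until-empty behaviour follows; countRemainingPods and the fixed sleep are hypothetical simplifications of the real list calls and rate-limited workqueue.

    package main

    import (
        "fmt"
        "time"
    )

    // countRemainingPods is a hypothetical stand-in for listing the namespace's
    // remaining pods; in the real controller this is a dynamic-client list call.
    func countRemainingPods(iteration int) int {
        if iteration < 3 {
            return 1 // the tester pod is still terminating
        }
        return 0
    }

    // deleteAllContent mimics the logged behaviour: if anything remains it
    // returns an error so the namespace is requeued and retried later.
    func deleteAllContent(namespace string, iteration int) error {
        if n := countRemainingPods(iteration); n > 0 {
            return fmt.Errorf("unexpected items still remain in namespace: %s for gvr: /v1, Resource=pods", namespace)
        }
        return nil
    }

    func main() {
        for i := 1; ; i++ {
            if err := deleteAllContent("azuredisk-7175", i); err != nil {
                fmt.Println("retrying:", err)
                time.Sleep(100 * time.Millisecond) // placeholder for the controller's requeue delay
                continue
            }
            fmt.Println("namespace content deleted")
            return
        }
    }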
I0826 18:35:41.501762       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-7175" (2.7µs)
I0826 18:35:41.510050       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:41.659710       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:41.659764       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (152.743432ms)
E0826 18:35:41.659794       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:41.672066       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:41.798418       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 42 items received
I0826 18:35:41.827045       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:41.827084       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (157.202598ms)
E0826 18:35:41.827115       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:41.854075       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:42.033637       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.033685       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (185.459982ms)
E0826 18:35:42.033711       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.077536       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:42.250718       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.250775       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (175.968255ms)
E0826 18:35:42.250800       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.334379       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:42.504474       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.504501       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (172.861878ms)
E0826 18:35:42.504549       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.668858       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:42.824578       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:42.824630       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (159.405382ms)
E0826 18:35:42.824663       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:43.149109       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:43.315819       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:43.315848       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (170.709495ms)
E0826 18:35:43.315876       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:43.994010       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:44.204541       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:44.204567       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (248.605199ms)
E0826 18:35:44.204612       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:45.393939       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 23 items received
I0826 18:35:45.488204       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:45.664510       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:45.664539       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (179.191937ms)
E0826 18:35:45.664570       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:45.793036       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 3 items received
I0826 18:35:48.232091       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:48.387273       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:48.387302       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (162.391465ms)
E0826 18:35:48.387330       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:48.840105       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0826 18:35:49.487831       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="62.399µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59374" resp=200
I0826 18:35:53.522485       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:35:53.672235       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:53.672264       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (164.009152ms)
E0826 18:35:53.672289       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:35:53.917025       1 gc_controller.go:161] GC'ing orphaned
I0826 18:35:53.917060       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:35:54.221780       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:35:54.263090       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:35:54.297834       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:35:54.297933       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" with version 4067
... skipping 128 lines ...
I0826 18:36:01.393939       1 stateful_set_control.go:120] StatefulSet azuredisk-1528/azuredisk-volume-tester-f9499 revisions current=azuredisk-volume-tester-f9499-596d9bf7c6 update=azuredisk-volume-tester-f9499-596d9bf7c6
I0826 18:36:01.393953       1 stateful_set.go:477] Successfully synced StatefulSet azuredisk-1528/azuredisk-volume-tester-f9499 successful
I0826 18:36:01.394035       1 stateful_set.go:431] Finished syncing statefulset "azuredisk-1528/azuredisk-volume-tester-f9499" (1.978485ms)
I0826 18:36:03.924139       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:36:04.165510       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:36:04.165545       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (253.135176ms)
E0826 18:36:04.165578       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:36:07.636510       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-f9499-0"
I0826 18:36:07.638952       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-f9499-0, PodDisruptionBudget controller will avoid syncing.
I0826 18:36:07.639197       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-f9499-0"
I0826 18:36:07.638358       1 stateful_set.go:224] Pod azuredisk-volume-tester-f9499-0 updated, objectMeta {Name:azuredisk-volume-tester-f9499-0 GenerateName:azuredisk-volume-tester-f9499- Namespace:azuredisk-1528 SelfLink: UID:d6c2aa78-3829-4d63-af52-394ae19d54a5 ResourceVersion:4655 Generation:0 CreationTimestamp:2021-08-26 18:36:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5790752139526973902 controller-revision-hash:azuredisk-volume-tester-f9499-596d9bf7c6 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-f9499-0] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-f9499 UID:d98feb48-74e6-44f6-9004-f970ecb35ab3 Controller:0xc00264045e BlockOwnerDeletion:0xc00264045f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-26 18:36:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d98feb48-74e6-44f6-9004-f970ecb35ab3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-08-26 18:36:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:azuredisk-volume-tester-f9499-0 GenerateName:azuredisk-volume-tester-f9499- Namespace:azuredisk-1528 SelfLink: UID:d6c2aa78-3829-4d63-af52-394ae19d54a5 ResourceVersion:4666 Generation:0 CreationTimestamp:2021-08-26 18:36:01 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-5790752139526973902 controller-revision-hash:azuredisk-volume-tester-f9499-596d9bf7c6 statefulset.kubernetes.io/pod-name:azuredisk-volume-tester-f9499-0] Annotations:map[cni.projectcalico.org/containerID:92dafa762080f3c818ca9f47acb05e155c006aa14859a8219355a7fa274f84db cni.projectcalico.org/podIP:192.168.65.8/32 cni.projectcalico.org/podIPs:192.168.65.8/32] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:azuredisk-volume-tester-f9499 UID:d98feb48-74e6-44f6-9004-f970ecb35ab3 Controller:0xc0024477fe BlockOwnerDeletion:0xc0024477ff}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-08-26 18:36:01 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:controller-revision-hash":{},"f:statefulset.kubernetes.io/pod-name":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d98feb48-74e6-44f6-9004-f970ecb35ab3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostname":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"pvc\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-08-26 18:36:01 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:calico Operation:Update APIVersion:v1 Time:2021-08-26 18:36:07 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]}.
I0826 18:36:07.642937       1 stateful_set.go:469] Syncing StatefulSet azuredisk-1528/azuredisk-volume-tester-f9499 with 1 pods
I0826 18:36:07.644794       1 stateful_set_control.go:376] StatefulSet azuredisk-1528/azuredisk-volume-tester-f9499 has 1 unhealthy Pods starting with azuredisk-volume-tester-f9499-0
... skipping 122 lines ...
I0826 18:36:24.300330       1 pv_controller.go:1038] volume "pvc-70747857-ca36-40e2-921b-69b558b2e18c" bound to claim "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0"
I0826 18:36:24.300373       1 pv_controller.go:1039] volume "pvc-70747857-ca36-40e2-921b-69b558b2e18c" status after binding: phase: Bound, bound to: "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0 (uid: 70747857-ca36-40e2-921b-69b558b2e18c)", boundByController: true
I0826 18:36:24.300389       1 pv_controller.go:1040] claim "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0" status after binding: phase: Bound, bound to: "pvc-70747857-ca36-40e2-921b-69b558b2e18c", bindCompleted: true, boundByController: true
I0826 18:36:24.650134       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7175
I0826 18:36:24.799958       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-7175, estimate: 15, errors: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:36:24.799984       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-7175" (153.412811ms)
E0826 18:36:24.800017       1 namespace_controller.go:162] deletion of namespace azuredisk-7175 failed: unexpected items still remain in namespace: azuredisk-7175 for gvr: /v1, Resource=pods
I0826 18:36:26.810785       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 5 items received
I0826 18:36:27.106291       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0826 18:36:29.487762       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="69.699µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59750" resp=200
I0826 18:36:30.080254       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0826 18:36:33.221645       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="azuredisk-7175/azuredisk-volume-tester-h9c5q" podUID=488ed6ec-672f-4252-84f1-c194dbbe01c6
I0826 18:36:33.221696       1 pvc_protection_controller.go:156] "Processing PVC" PVC="azuredisk-7175/pvc-fhw55"
... skipping 38 lines ...
I0826 18:36:33.258377       1 pv_controller.go:1108] reclaimVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: policy is Delete
I0826 18:36:33.258387       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]]
I0826 18:36:33.258398       1 pv_controller.go:1763] operation "delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]" is already running, skipping
I0826 18:36:33.258483       1 pv_controller.go:1231] deleteVolumeOperation [pvc-6f03fdae-6b87-4223-aec8-6a22360169f9] started
I0826 18:36:33.260458       1 pv_controller.go:1340] isVolumeReleased[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: volume is released
I0826 18:36:33.260474       1 pv_controller.go:1404] doDeleteVolume [pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]
I0826 18:36:33.260518       1 pv_controller.go:1259] deletion of volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state
I0826 18:36:33.260525       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: set phase Failed
I0826 18:36:33.260531       1 pv_controller.go:858] updating PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: set phase Failed
I0826 18:36:33.263637       1 pv_protection_controller.go:205] Got event on PV pvc-6f03fdae-6b87-4223-aec8-6a22360169f9
I0826 18:36:33.263688       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" with version 4712
I0826 18:36:33.263729       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: phase: Failed, bound to: "azuredisk-7175/pvc-fhw55 (uid: 6f03fdae-6b87-4223-aec8-6a22360169f9)", boundByController: true
I0826 18:36:33.263762       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: volume is bound to claim azuredisk-7175/pvc-fhw55
I0826 18:36:33.263782       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: claim azuredisk-7175/pvc-fhw55 not found
I0826 18:36:33.263789       1 pv_controller.go:1108] reclaimVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: policy is Delete
I0826 18:36:33.263800       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]]
I0826 18:36:33.263806       1 pv_controller.go:1763] operation "delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]" is already running, skipping
I0826 18:36:33.264569       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" with version 4712
I0826 18:36:33.264596       1 pv_controller.go:879] volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" entered phase "Failed"
I0826 18:36:33.264626       1 pv_controller.go:901] volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state
E0826 18:36:33.264950       1 goroutinemap.go:150] Operation for "delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]" failed. No retries permitted until 2021-08-26 18:36:33.764869375 +0000 UTC m=+2033.623624288 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state
I0826 18:36:33.264988       1 event.go:291] "Event occurred" object="pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state"
I0826 18:36:33.917920       1 gc_controller.go:161] GC'ing orphaned
I0826 18:36:33.917960       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0826 18:36:34.402077       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-z3rmsd-md-0-sq4fr"
I0826 18:36:34.655977       1 node_lifecycle_controller.go:1047] Node capz-z3rmsd-md-0-sq4fr ReadyCondition updated. Updating timestamp.
I0826 18:36:35.171465       1 resource_quota_monitor.go:355] QuotaMonitor process object: apps/v1, Resource=statefulsets, namespace azuredisk-1528, name azuredisk-volume-tester-f9499, uid d98feb48-74e6-44f6-9004-f970ecb35ab3, event type delete
I0826 18:36:35.171574       1 stateful_set.go:440] StatefulSet has been deleted azuredisk-1528/azuredisk-volume-tester-f9499
... skipping 12 lines ...
I0826 18:36:38.025828       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3612" (3.799µs)
I0826 18:36:38.155613       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5786" (13.814195ms)
I0826 18:36:38.156088       1 publisher.go:186] Finished syncing namespace "azuredisk-5786" (14.317791ms)
I0826 18:36:39.265777       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0826 18:36:39.300152       1 pv_controller_base.go:528] resyncing PV controller
I0826 18:36:39.300235       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" with version 4712
I0826 18:36:39.300273       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: phase: Failed, bound to: "azuredisk-7175/pvc-fhw55 (uid: 6f03fdae-6b87-4223-aec8-6a22360169f9)", boundByController: true
I0826 18:36:39.300308       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: volume is bound to claim azuredisk-7175/pvc-fhw55
I0826 18:36:39.300326       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: claim azuredisk-7175/pvc-fhw55 not found
I0826 18:36:39.300334       1 pv_controller.go:1108] reclaimVolume[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: policy is Delete
I0826 18:36:39.300350       1 pv_controller.go:1752] scheduleOperation[delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]]
I0826 18:36:39.300368       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-70747857-ca36-40e2-921b-69b558b2e18c" with version 4528
I0826 18:36:39.300391       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-70747857-ca36-40e2-921b-69b558b2e18c]: phase: Bound, bound to: "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0 (uid: 70747857-ca36-40e2-921b-69b558b2e18c)", boundByController: true
... skipping 18 lines ...
I0826 18:36:39.300718       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0] status: phase Bound already set
I0826 18:36:39.300729       1 pv_controller.go:1038] volume "pvc-70747857-ca36-40e2-921b-69b558b2e18c" bound to claim "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0"
I0826 18:36:39.300747       1 pv_controller.go:1039] volume "pvc-70747857-ca36-40e2-921b-69b558b2e18c" status after binding: phase: Bound, bound to: "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0 (uid: 70747857-ca36-40e2-921b-69b558b2e18c)", boundByController: true
I0826 18:36:39.300762       1 pv_controller.go:1040] claim "azuredisk-1528/pvc-azuredisk-volume-tester-f9499-0" status after binding: phase: Bound, bound to: "pvc-70747857-ca36-40e2-921b-69b558b2e18c", bindCompleted: true, boundByController: true
I0826 18:36:39.305555       1 pv_controller.go:1340] isVolumeReleased[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: volume is released
I0826 18:36:39.305576       1 pv_controller.go:1404] doDeleteVolume [pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]
I0826 18:36:39.305612       1 pv_controller.go:1259] deletion of volume "pvc-6f03fdae-6b87-4223-aec8-6a22360169f9" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state
I0826 18:36:39.305630       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: set phase Failed
I0826 18:36:39.305645       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-6f03fdae-6b87-4223-aec8-6a22360169f9]: phase Failed already set
E0826 18:36:39.305667       1 goroutinemap.go:150] Operation for "delete-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9[8692e7d1-0755-4472-99aa-313717f2cb27]" failed. No retries permitted until 2021-08-26 18:36:40.305651653 +0000 UTC m=+2040.164406466 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-z3rmsd/providers/Microsoft.Compute/disks/capz-z3rmsd-dynamic-pvc-6f03fdae-6b87-4223-aec8-6a22360169f9) since it's in attaching or detaching state
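
The goroutinemap error above is per-operation exponential backoff: the disk delete failed because the disk is still attaching/detaching, so no retries are permitted for 1s, and the delay grows on each subsequent failure. A rough sketch of that pattern, using my own simplified types rather than the upstream goroutinemap implementation:

```go
package main

import (
	"fmt"
	"time"
)

// retryState is a simplified stand-in for the per-operation backoff the
// goroutinemap log line reports ("No retries permitted until ...").
type retryState struct {
	lastError time.Time
	delay     time.Duration // durationBeforeRetry in the log
	maxDelay  time.Duration
}

// recordFailure notes a failed attempt and doubles the retry delay, capped at maxDelay.
func (r *retryState) recordFailure(now time.Time, initial time.Duration) {
	if r.delay == 0 {
		r.delay = initial
	} else {
		r.delay *= 2
		if r.delay > r.maxDelay {
			r.delay = r.maxDelay
		}
	}
	r.lastError = now
}

// retryAllowed reports whether enough time has passed since the last failure.
func (r *retryState) retryAllowed(now time.Time) bool {
	return now.After(r.lastError.Add(r.delay))
}

func main() {
	s := &retryState{maxDelay: 10 * time.Minute}
	now := time.Now()
	s.recordFailure(now, time.Second) // first failure: retry blocked for 1s
	fmt.Println("retry allowed immediately:", s.retryAllowed(now))
	fmt.Println("retry allowed after 2s:   ", s.retryAllowed(now.Add(2*time.Second)))
}
```
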
I0826 18:36:39.488584       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="63.099µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59862" resp=200
I0826 18:36:40.338596       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-5786" (4µs)
I0826 18:36:40.642706       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1528
I0826 18:36:40.666091       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1528, name azuredisk-volume-tester-f9499-0.169eef014486e4db, uid 8f7d3a8e-9ed9-4caa-8f46-f48806aeeac4, event type delete
I0826 18:36:40.679778       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1528, name azuredisk-volume-tester-f9499-0.169eef03c027f690, uid be195615-b512-40e5-b2bc-af5be0a80ad2, event type delete
I0826 18:36:40.682535       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1528, name azuredisk-volume-tester-f9499-0.169eef05b37f7254, uid 49f5a173-0a9e-4148-baeb-a4c7c07a14f6, event type delete
... skipping 43 lines ...
I0826 18:36:40.910533       1 namespace_controller.go:157] Content remaining in namespace azuredisk-1528, waiting 16 seconds
I0826 18:36:41.063341       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 5 items received
I0826 18:36:43.033454       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3612
I0826 18:36:43.075775       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3612, name kube-root-ca.crt, uid 0ef8141d-a6e9-454c-a9f8-1219260b79c2, event type delete
I0826 18:36:43.079329       1 publisher.go:186] Finished syncing namespace "azuredisk-3612" (3.307775ms)
I0826 18:36:43.085971       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3612, name default-token-dqnbr, uid f0a0164b-469a-4187-8262-af004756d09a, event type delete
E0826 18:36:43.102228       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3612/default: secrets "default-token-wdtj9" is forbidden: unable to create new content in namespace azuredisk-3612 because it is being terminated
I0826 18:36:43.173554       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3612/default), service account deleted, removing tokens
I0826 18:36:43.173894       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3612, name default, uid 8dbb2c78-91f9-41be-b3b0-b9309f93b51d, event type delete
I0826 18:36:43.173946       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3612" (2.4µs)
I0826 18:36:43.198893       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3612, estimate: 0, errors: <nil>
I0826 18:36:43.199180       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3612" (2.8µs)
I0826 18:36:43.220601       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3612" (194.362927ms)
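
The namespace controller lines above show the test namespaces being torn down; the tokens_controller error is expected noise while a namespace is terminating. Test cleanup usually just polls until the namespace is gone. A minimal client-go sketch, assuming a hypothetical helper name and a 2s/5m poll interval and timeout:

```go
package e2eutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNamespaceDeleted is a hypothetical helper: it polls until the
// namespace no longer exists, returning wait.ErrWaitTimeout ("timed out
// waiting for the condition") if it never disappears within the timeout.
func waitForNamespaceDeleted(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // namespace fully terminated
		}
		return false, nil // still terminating (or transient error); keep polling
	})
}
```
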
2021/08/26 18:36:44 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [single-az] [It] should detach disk after pod deleted [disk.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:735
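
The failing spec deletes the test pod and then waits for the Azure disk to detach from the node; the "timed out waiting for the condition" in the failure output is the error wait.PollImmediate returns when that never happens. Below is a hypothetical reconstruction of that kind of check (helper name, interval, and timeout are assumptions, not the code at testsuites.go:735):

```go
package e2eutil

import (
	"context"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForVolumeDetach polls the node status until the disk no longer appears
// in VolumesAttached. If the driver never detaches the disk, PollImmediate
// returns wait.ErrWaitTimeout, whose message is exactly
// "timed out waiting for the condition".
func waitForVolumeDetach(ctx context.Context, cs kubernetes.Interface, nodeName, diskURI string) error {
	return wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, av := range node.Status.VolumesAttached {
			if strings.Contains(string(av.Name), diskURI) {
				return false, nil // disk still attached; keep waiting
			}
		}
		return true, nil
	})
}
```
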

Ran 12 of 53 Specs in 1725.819 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 41 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes.
To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711

... skipping 2 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.4

--- FAIL: TestE2E (1725.83s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	1725.874s
FAIL
make: *** [Makefile:254: e2e-test] Error 1
================ DUMPING LOGS FOR MANAGEMENT CLUSTER ================
Exported logs for cluster "capz" to:
/logs/artifacts/management-cluster
================ DUMPING LOGS FOR WORKLOAD CLUSTER ================
Deploying log-dump-daemonset
daemonset.apps/log-dump-node created
... skipping 24 lines ...