PR nilo19: Automated cherry pick of #105839: fix: remove VMSS and VMSS instances from SLB backend pool
Result ABORTED
Tests 0 failed / 9 succeeded
Started 2021-11-02 05:01
Elapsed 53m51s
Revision 54163e6d2ceb91e55ae0bd6a65335ef425c28005
Refs 106072

No Test Failures!


9 Passed Tests

44 Skipped Tests

Error lines from build-log.txt

... skipping 255 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160
STEP: Creating a kubernetes client
Nov  2 05:31:14.153: INFO: >>> kubeConfig: /root/tmp555149321/kubeconfig/kubeconfig.uksouth.json
STEP: Building a namespace api object, basename azuredisk
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-3274
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [1.026 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 85 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 05:31:20.651: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-p7rc6" in namespace "azuredisk-5356" to be "Succeeded or Failed"
Nov  2 05:31:20.770: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 119.736408ms
Nov  2 05:31:22.874: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222829097s
Nov  2 05:31:24.981: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.330503622s
Nov  2 05:31:27.086: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435106137s
Nov  2 05:31:29.189: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53859907s
Nov  2 05:31:31.294: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.643470215s
... skipping 21 lines ...
Nov  2 05:32:17.621: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Running", Reason="", readiness=true. Elapsed: 56.970144624s
Nov  2 05:32:19.728: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Running", Reason="", readiness=true. Elapsed: 59.077457463s
Nov  2 05:32:21.831: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.180798309s
Nov  2 05:32:23.935: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.284201955s
Nov  2 05:32:26.039: INFO: Pod "azuredisk-volume-tester-p7rc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m5.388031907s
STEP: Saw pod success
Nov  2 05:32:26.039: INFO: Pod "azuredisk-volume-tester-p7rc6" satisfied condition "Succeeded or Failed"
Nov  2 05:32:26.039: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-p7rc6"
Nov  2 05:32:26.198: INFO: Pod azuredisk-volume-tester-p7rc6 has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-p7rc6 in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 05:32:26.556: INFO: deleting PVC "azuredisk-5356"/"pvc-snrnj"
Nov  2 05:32:26.556: INFO: Deleting PersistentVolumeClaim "pvc-snrnj"
STEP: waiting for claim's PV "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" to be deleted
Nov  2 05:32:26.660: INFO: Waiting up to 10m0s for PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 to get deleted
Nov  2 05:32:26.763: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Released (102.859296ms)
Nov  2 05:32:31.866: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (5.206550907s)
Nov  2 05:32:36.972: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (10.311874464s)
Nov  2 05:32:42.075: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (15.415148252s)
Nov  2 05:32:47.179: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (20.519221596s)
Nov  2 05:32:52.290: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (25.629694761s)
Nov  2 05:32:57.393: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 found and phase=Failed (30.733582742s)
Nov  2 05:33:02.499: INFO: PersistentVolume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 was removed
Nov  2 05:33:02.499: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Nov  2 05:33:02.601: INFO: Claim "azuredisk-5356" in namespace "pvc-snrnj" doesn't exist in the system
Nov  2 05:33:02.601: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-btrnk
Nov  2 05:33:02.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 134 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Nov  2 05:33:14.238: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-v5n5j" in namespace "azuredisk-2451" to be "Error status code"
Nov  2 05:33:14.346: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 108.135126ms
Nov  2 05:33:16.450: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211896002s
Nov  2 05:33:18.553: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 4.315269175s
Nov  2 05:33:20.656: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418169984s
Nov  2 05:33:22.759: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 8.52122941s
Nov  2 05:33:24.863: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 10.625255296s
... skipping 10 lines ...
Nov  2 05:33:50.594: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 36.356465307s
Nov  2 05:33:52.698: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 38.460451875s
Nov  2 05:33:54.802: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 40.563861448s
Nov  2 05:33:56.905: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Pending", Reason="", readiness=false. Elapsed: 42.667036526s
Nov  2 05:33:59.008: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Running", Reason="", readiness=true. Elapsed: 44.770448192s
Nov  2 05:34:01.113: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Running", Reason="", readiness=true. Elapsed: 46.875140778s
Nov  2 05:34:03.217: INFO: Pod "azuredisk-volume-tester-v5n5j": Phase="Failed", Reason="", readiness=false. Elapsed: 48.979673311s
STEP: Saw pod failure
Nov  2 05:34:03.218: INFO: Pod "azuredisk-volume-tester-v5n5j" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Nov  2 05:34:03.360: INFO: deleting Pod "azuredisk-2451"/"azuredisk-volume-tester-v5n5j"
Nov  2 05:34:03.487: INFO: Pod azuredisk-volume-tester-v5n5j has the following logs: out-file : Access to the path 'C:\mnt\test-1\data' is denied.
At line:1 char:1
+ echo $null >> C:\mnt\test-1\data
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 7 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 05:34:03.812: INFO: deleting PVC "azuredisk-2451"/"pvc-smqjq"
Nov  2 05:34:03.812: INFO: Deleting PersistentVolumeClaim "pvc-smqjq"
STEP: waiting for claim's PV "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" to be deleted
Nov  2 05:34:03.917: INFO: Waiting up to 10m0s for PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 to get deleted
Nov  2 05:34:04.094: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (177.321753ms)
Nov  2 05:34:09.198: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (5.280496871s)
Nov  2 05:34:14.300: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (10.38327368s)
Nov  2 05:34:19.408: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (15.490629108s)
Nov  2 05:34:24.515: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (20.597627533s)
Nov  2 05:34:29.619: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (25.701529049s)
Nov  2 05:34:34.722: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (30.804850081s)
Nov  2 05:34:39.826: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 found and phase=Failed (35.908822587s)
Nov  2 05:34:44.935: INFO: PersistentVolume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 was removed
Nov  2 05:34:44.935: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2451 to be removed
Nov  2 05:34:45.037: INFO: Claim "azuredisk-2451" in namespace "pvc-smqjq" doesn't exist in the system
Nov  2 05:34:45.037: INFO: deleting StorageClass azuredisk-2451-kubernetes.io-azure-disk-dynamic-sc-z5f6d
Nov  2 05:34:45.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2451" for this suite.
... skipping 48 lines ...
Nov  2 05:37:05.541: INFO: deleting PVC "azuredisk-9828"/"pvc-6lm4x"
Nov  2 05:37:05.542: INFO: Deleting PersistentVolumeClaim "pvc-6lm4x"
STEP: waiting for claim's PV "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" to be deleted
Nov  2 05:37:05.645: INFO: Waiting up to 10m0s for PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b to get deleted
Nov  2 05:37:05.765: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Bound (119.713971ms)
Nov  2 05:37:10.870: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Bound (5.224883794s)
Nov  2 05:37:15.973: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (10.32768894s)
Nov  2 05:37:21.077: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (15.432021532s)
Nov  2 05:37:26.181: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (20.535704899s)
Nov  2 05:37:31.286: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (25.640329075s)
Nov  2 05:37:36.391: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (30.745381588s)
Nov  2 05:37:41.495: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b found and phase=Failed (35.849224159s)
Nov  2 05:37:46.599: INFO: PersistentVolume pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b was removed
Nov  2 05:37:46.599: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Nov  2 05:37:46.701: INFO: Claim "azuredisk-9828" in namespace "pvc-6lm4x" doesn't exist in the system
Nov  2 05:37:46.701: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-j4nq5
Nov  2 05:37:46.805: INFO: deleting Pod "azuredisk-9828"/"azuredisk-volume-tester-xtlxb"
Nov  2 05:37:46.949: INFO: Pod azuredisk-volume-tester-xtlxb has the following logs: 
... skipping 2 lines ...
STEP: checking the PV
Nov  2 05:37:47.258: INFO: deleting PVC "azuredisk-9828"/"pvc-jssk6"
Nov  2 05:37:47.258: INFO: Deleting PersistentVolumeClaim "pvc-jssk6"
STEP: waiting for claim's PV "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" to be deleted
Nov  2 05:37:47.362: INFO: Waiting up to 10m0s for PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b to get deleted
Nov  2 05:37:47.464: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Bound (102.482512ms)
Nov  2 05:37:52.567: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Failed (5.205556997s)
Nov  2 05:37:57.671: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Failed (10.309505289s)
Nov  2 05:38:02.775: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Failed (15.413142627s)
Nov  2 05:38:07.878: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Failed (20.516224346s)
Nov  2 05:38:12.981: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b found and phase=Failed (25.619436834s)
Nov  2 05:38:18.085: INFO: PersistentVolume pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b was removed
Nov  2 05:38:18.085: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Nov  2 05:38:18.187: INFO: Claim "azuredisk-9828" in namespace "pvc-jssk6" doesn't exist in the system
Nov  2 05:38:18.187: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-896s4
Nov  2 05:38:18.290: INFO: deleting Pod "azuredisk-9828"/"azuredisk-volume-tester-fpcmq"
Nov  2 05:38:18.415: INFO: Pod azuredisk-volume-tester-fpcmq has the following logs: 
... skipping 2 lines ...
STEP: checking the PV
Nov  2 05:38:18.724: INFO: deleting PVC "azuredisk-9828"/"pvc-k4stb"
Nov  2 05:38:18.724: INFO: Deleting PersistentVolumeClaim "pvc-k4stb"
STEP: waiting for claim's PV "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" to be deleted
Nov  2 05:38:18.829: INFO: Waiting up to 10m0s for PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 to get deleted
Nov  2 05:38:18.932: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Bound (102.964888ms)
Nov  2 05:38:24.036: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (5.206746804s)
Nov  2 05:38:29.140: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (10.31060273s)
Nov  2 05:38:34.246: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (15.417509505s)
Nov  2 05:38:39.353: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (20.52378154s)
Nov  2 05:38:44.456: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (25.627037844s)
Nov  2 05:38:49.564: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (30.735352353s)
Nov  2 05:38:54.671: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 found and phase=Failed (35.842400881s)
Nov  2 05:38:59.775: INFO: PersistentVolume pvc-a7576988-640b-44f5-be10-0c2dd6754b25 was removed
Nov  2 05:38:59.775: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9828 to be removed
Nov  2 05:38:59.877: INFO: Claim "azuredisk-9828" in namespace "pvc-k4stb" doesn't exist in the system
Nov  2 05:38:59.877: INFO: deleting StorageClass azuredisk-9828-kubernetes.io-azure-disk-dynamic-sc-pnvwx
Nov  2 05:38:59.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9828" for this suite.
... skipping 68 lines ...
STEP: checking the PV
Nov  2 05:41:19.624: INFO: deleting PVC "azuredisk-1563"/"pvc-9gz5b"
Nov  2 05:41:19.624: INFO: Deleting PersistentVolumeClaim "pvc-9gz5b"
STEP: waiting for claim's PV "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" to be deleted
Nov  2 05:41:19.728: INFO: Waiting up to 10m0s for PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 to get deleted
Nov  2 05:41:19.861: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Bound (133.000986ms)
Nov  2 05:41:24.965: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (5.236858762s)
Nov  2 05:41:30.067: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (10.3395545s)
Nov  2 05:41:35.172: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (15.444317346s)
Nov  2 05:41:40.277: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (20.548791107s)
Nov  2 05:41:45.380: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (25.65255619s)
Nov  2 05:41:50.485: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (30.756843631s)
Nov  2 05:41:55.589: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 found and phase=Failed (35.861179754s)
Nov  2 05:42:00.692: INFO: PersistentVolume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 was removed
Nov  2 05:42:00.692: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1563 to be removed
Nov  2 05:42:00.794: INFO: Claim "azuredisk-1563" in namespace "pvc-9gz5b" doesn't exist in the system
Nov  2 05:42:00.794: INFO: deleting StorageClass azuredisk-1563-kubernetes.io-azure-disk-dynamic-sc-7mbpj
Nov  2 05:42:00.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1563" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 05:42:28.196: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ph8r6" in namespace "azuredisk-552" to be "Succeeded or Failed"
Nov  2 05:42:28.308: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 111.508223ms
Nov  2 05:42:30.411: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214576768s
Nov  2 05:42:32.516: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319286604s
Nov  2 05:42:34.620: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423363402s
Nov  2 05:42:36.727: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.530807079s
Nov  2 05:42:38.831: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.635060828s
... skipping 22 lines ...
Nov  2 05:43:27.221: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 59.02489375s
Nov  2 05:43:29.326: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.129366707s
Nov  2 05:43:31.429: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.232628339s
Nov  2 05:43:33.534: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.337325416s
Nov  2 05:43:35.641: INFO: Pod "azuredisk-volume-tester-ph8r6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m7.445182129s
STEP: Saw pod success
Nov  2 05:43:35.642: INFO: Pod "azuredisk-volume-tester-ph8r6" satisfied condition "Succeeded or Failed"
Nov  2 05:43:35.642: INFO: deleting Pod "azuredisk-552"/"azuredisk-volume-tester-ph8r6"
Nov  2 05:43:35.881: INFO: Pod azuredisk-volume-tester-ph8r6 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-ph8r6 in namespace azuredisk-552
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 05:43:36.206: INFO: deleting PVC "azuredisk-552"/"pvc-b8cd7"
Nov  2 05:43:36.206: INFO: Deleting PersistentVolumeClaim "pvc-b8cd7"
STEP: waiting for claim's PV "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" to be deleted
Nov  2 05:43:36.321: INFO: Waiting up to 10m0s for PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 to get deleted
Nov  2 05:43:36.422: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (101.578656ms)
Nov  2 05:43:41.525: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (5.204706002s)
Nov  2 05:43:46.628: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (10.307508686s)
Nov  2 05:43:51.732: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (15.411216377s)
Nov  2 05:43:56.845: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (20.524230788s)
Nov  2 05:44:01.950: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (25.628906491s)
Nov  2 05:44:07.054: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (30.732796581s)
Nov  2 05:44:12.159: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (35.838273254s)
Nov  2 05:44:17.263: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (40.941801124s)
Nov  2 05:44:22.366: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (46.045307557s)
Nov  2 05:44:27.471: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 found and phase=Failed (51.150719487s)
Nov  2 05:44:32.581: INFO: PersistentVolume pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 was removed
Nov  2 05:44:32.581: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-552 to be removed
Nov  2 05:44:32.683: INFO: Claim "azuredisk-552" in namespace "pvc-b8cd7" doesn't exist in the system
Nov  2 05:44:32.683: INFO: deleting StorageClass azuredisk-552-kubernetes.io-azure-disk-dynamic-sc-s4pv8
STEP: validating provisioned PV
STEP: checking the PV
... skipping 137 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 05:45:04.087: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-zb6xz" in namespace "azuredisk-9267" to be "Succeeded or Failed"
Nov  2 05:45:04.199: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 112.433096ms
Nov  2 05:45:06.303: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21592424s
Nov  2 05:45:08.407: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 4.320631195s
Nov  2 05:45:10.512: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425429335s
Nov  2 05:45:12.616: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529492713s
Nov  2 05:45:14.720: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632962617s
... skipping 22 lines ...
Nov  2 05:46:03.113: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 59.025961102s
Nov  2 05:46:05.217: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.129816939s
Nov  2 05:46:07.319: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.23261448s
Nov  2 05:46:09.423: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.336346759s
Nov  2 05:46:11.527: INFO: Pod "azuredisk-volume-tester-zb6xz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m7.440565112s
STEP: Saw pod success
Nov  2 05:46:11.527: INFO: Pod "azuredisk-volume-tester-zb6xz" satisfied condition "Succeeded or Failed"
Nov  2 05:46:11.527: INFO: deleting Pod "azuredisk-9267"/"azuredisk-volume-tester-zb6xz"
Nov  2 05:46:11.790: INFO: Pod azuredisk-volume-tester-zb6xz has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-zb6xz in namespace azuredisk-9267
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 05:46:12.114: INFO: deleting PVC "azuredisk-9267"/"pvc-rgp9p"
Nov  2 05:46:12.114: INFO: Deleting PersistentVolumeClaim "pvc-rgp9p"
STEP: waiting for claim's PV "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" to be deleted
Nov  2 05:46:12.233: INFO: Waiting up to 10m0s for PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 to get deleted
Nov  2 05:46:12.335: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (101.653234ms)
Nov  2 05:46:17.438: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (5.20498668s)
Nov  2 05:46:22.543: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (10.309925707s)
Nov  2 05:46:27.646: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (15.413265115s)
Nov  2 05:46:32.750: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (20.517037587s)
Nov  2 05:46:37.853: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (25.61960807s)
Nov  2 05:46:42.956: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (30.722964394s)
Nov  2 05:46:48.063: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (35.829626968s)
Nov  2 05:46:53.169: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (40.936135464s)
Nov  2 05:46:58.276: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (46.043286026s)
Nov  2 05:47:03.384: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (51.150853827s)
Nov  2 05:47:08.488: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (56.254542569s)
Nov  2 05:47:13.594: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (1m1.360639047s)
Nov  2 05:47:18.700: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (1m6.467223708s)
Nov  2 05:47:23.807: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 found and phase=Failed (1m11.573412559s)
Nov  2 05:47:28.909: INFO: PersistentVolume pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 was removed
Nov  2 05:47:28.909: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9267 to be removed
Nov  2 05:47:29.013: INFO: Claim "azuredisk-9267" in namespace "pvc-rgp9p" doesn't exist in the system
Nov  2 05:47:29.013: INFO: deleting StorageClass azuredisk-9267-kubernetes.io-azure-disk-dynamic-sc-2x6s8
STEP: validating provisioned PV
STEP: checking the PV
... skipping 470 lines ...
I1102 05:27:18.911582       1 azure_securitygroupclient.go:63] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=100, bucket=1000
I1102 05:27:18.911588       1 azure_securitygroupclient.go:66] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I1102 05:27:18.911597       1 azure_publicipclient.go:63] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=100, bucket=1000
I1102 05:27:18.911602       1 azure_publicipclient.go:66] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I1102 05:27:18.911650       1 azure.go:741] Setting up informers for Azure cloud provider
I1102 05:27:18.914211       1 shared_informer.go:240] Waiting for caches to sync for tokens
W1102 05:27:19.008926       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I1102 05:27:19.008955       1 controllermanager.go:562] Starting "attachdetach"
I1102 05:27:19.015402       1 shared_informer.go:247] Caches are synced for tokens 
W1102 05:27:19.021617       1 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I1102 05:27:19.022978       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I1102 05:27:19.023006       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I1102 05:27:19.023018       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
... skipping 6 lines ...
I1102 05:27:19.023116       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I1102 05:27:19.023140       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I1102 05:27:19.023246       1 controllermanager.go:577] Started "attachdetach"
I1102 05:27:19.023256       1 controllermanager.go:562] Starting "pv-protection"
I1102 05:27:19.023375       1 attach_detach_controller.go:328] Starting attach detach controller
I1102 05:27:19.023392       1 shared_informer.go:240] Waiting for caches to sync for attach detach
W1102 05:27:19.023486       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-76635945-0" does not exist
I1102 05:27:19.095518       1 controllermanager.go:577] Started "pv-protection"
I1102 05:27:19.095540       1 controllermanager.go:562] Starting "serviceaccount"
I1102 05:27:19.095629       1 pv_protection_controller.go:83] Starting PV protection controller
I1102 05:27:19.095639       1 shared_informer.go:240] Waiting for caches to sync for PV protection
I1102 05:27:19.132216       1 controllermanager.go:577] Started "serviceaccount"
I1102 05:27:19.132240       1 controllermanager.go:562] Starting "daemonset"
... skipping 251 lines ...
I1102 05:27:24.078761       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1102 05:27:28.477464       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:27:33.478120       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:27:38.481099       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:27:41.849960       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-787d7f6757" need=1 creating=1
I1102 05:27:41.850390       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d7f6757 to 1"
I1102 05:27:41.921587       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I1102 05:27:41.966291       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-87b67c5fd" need=1 creating=1
I1102 05:27:41.971745       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-87b67c5fd to 1"
I1102 05:27:42.190446       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I1102 05:27:42.247298       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler-87b67c5fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-87b67c5fd-zm8g6"
I1102 05:27:42.252289       1 event.go:291] "Event occurred" object="kube-system/coredns-787d7f6757" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d7f6757-96f78"
I1102 05:27:43.481751       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:27:44.709124       1 event.go:291] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-w4dbg"
I1102 05:27:44.742628       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f2xkn"
I1102 05:27:48.482566       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:27:50.727897       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/metrics-server-67b6bdd56c" need=1 creating=1
I1102 05:27:50.728574       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-67b6bdd56c to 1"
I1102 05:27:50.762053       1 event.go:291] "Event occurred" object="kube-system/metrics-server-67b6bdd56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-67b6bdd56c-ljwpn"
I1102 05:27:50.787860       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I1102 05:27:53.483074       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
E1102 05:27:53.707183       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1102 05:27:54.078776       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1102 05:27:58.483870       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:03.484502       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:08.487626       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:13.488682       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:18.489537       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:23.490179       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
E1102 05:28:23.724051       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1102 05:28:24.097773       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1102 05:28:28.490734       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:33.491751       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:38.492858       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:43.493721       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:48.494811       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:53.496030       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:28:58.496764       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:29:03.497615       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:29:08.498583       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
W1102 05:29:12.331357       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="7663k8s000" does not exist
I1102 05:29:12.365842       1 ttl_controller.go:276] "Changed ttl annotation" node="7663k8s000" new_ttl="0s"
W1102 05:29:12.938631       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="7663k8s001" does not exist
I1102 05:29:12.960578       1 ttl_controller.go:276] "Changed ttl annotation" node="7663k8s001" new_ttl="0s"
I1102 05:29:13.499762       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: uksouth::0
I1102 05:29:13.499801       1 node_lifecycle_controller.go:770] Controller observed a new Node: "7663k8s000"
I1102 05:29:13.499815       1 controller_utils.go:172] Recording Registered Node 7663k8s000 in Controller event message for node 7663k8s000
I1102 05:29:13.500003       1 node_lifecycle_controller.go:770] Controller observed a new Node: "7663k8s001"
I1102 05:29:13.500019       1 controller_utils.go:172] Recording Registered Node 7663k8s001 in Controller event message for node 7663k8s001
... skipping 13 lines ...
I1102 05:29:23.031425       1 controller.go:262] Node changes detected, triggering a full node sync on all loadbalancer services
I1102 05:29:23.031545       1 controller.go:730] Syncing backends for all LB services.
I1102 05:29:23.031580       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I1102 05:29:23.502043       1 node_lifecycle_controller.go:893] Node 7663k8s000 is healthy again, removing all taints
I1102 05:29:23.502111       1 node_lifecycle_controller.go:893] Node 7663k8s001 is healthy again, removing all taints
I1102 05:29:23.502314       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
E1102 05:31:17.242578       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-4z44s" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
E1102 05:31:19.213463       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-694/default: secrets "default-token-vck97" is forbidden: unable to create new content in namespace azuredisk-694 because it is being terminated
I1102 05:31:20.486148       1 event.go:291] "Event occurred" object="azuredisk-5356/pvc-snrnj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 05:31:21.308693       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-495/default: secrets "default-token-q4g9b" is forbidden: unable to create new content in namespace azuredisk-495 because it is being terminated
I1102 05:31:22.262412       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8081
E1102 05:31:22.328582       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9947/default: secrets "default-token-d8v52" is forbidden: unable to create new content in namespace azuredisk-9947 because it is being terminated
I1102 05:31:23.128500       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 StorageAccountType:Standard_LRS Size:10
I1102 05:31:23.157433       1 pv_controller.go:1676] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" provisioned for claim "azuredisk-5356/pvc-snrnj"
I1102 05:31:23.157713       1 event.go:291] "Event occurred" object="azuredisk-5356/pvc-snrnj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 using kubernetes.io/azure-disk"
I1102 05:31:23.212724       1 pv_controller.go:879] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" entered phase "Bound"
I1102 05:31:23.212849       1 pv_controller.go:982] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" bound to claim "azuredisk-5356/pvc-snrnj"
I1102 05:31:23.243336       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1318
... skipping 12 lines ...
I1102 05:31:34.213534       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326") from node "7663k8s000" 
I1102 05:31:34.213951       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-p7rc6" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326\" "
I1102 05:32:26.635973       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-5356/pvc-snrnj"
I1102 05:32:26.648096       1 pv_controller.go:640] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" is released and reclaim policy "Delete" will be executed
I1102 05:32:26.652099       1 pv_controller.go:879] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" entered phase "Released"
I1102 05:32:26.670450       1 pv_controller.go:1340] isVolumeReleased[pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326]: volume is released
I1102 05:32:26.716101       1 pv_controller.go:879] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" entered phase "Failed"
E1102 05:32:26.716265       1 goroutinemap.go:150] Operation for "delete-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326[0757669b-7da2-41fc-a93d-415cc2b5ba30]" failed. No retries permitted until 2021-11-02 05:32:27.216244647 +0000 UTC m=+309.707138686 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:32:26.716566       1 event.go:291] "Event occurred" object="pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:32:32.981635       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326") on node "7663k8s000" 
I1102 05:32:33.000050       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326") on node "7663k8s000" 
I1102 05:32:33.010470       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 from node "7663k8s000"
I1102 05:32:33.010509       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326"
I1102 05:32:33.010522       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326)
I1102 05:32:38.583830       1 pv_controller.go:1340] isVolumeReleased[pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326]: volume is released
E1102 05:32:38.584091       1 goroutinemap.go:150] Operation for "delete-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326[0757669b-7da2-41fc-a93d-415cc2b5ba30]" failed. No retries permitted until 2021-11-02 05:32:39.584064873 +0000 UTC m=+322.074958912 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326) since it's in attaching or detaching state
I1102 05:32:48.619276       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326) returned with <nil>
I1102 05:32:48.619321       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326) succeeded
I1102 05:32:48.619331       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326 was detached from node:7663k8s000
I1102 05:32:48.619359       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326") on node "7663k8s000" 
I1102 05:32:53.591273       1 pv_controller.go:1340] isVolumeReleased[pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326]: volume is released
I1102 05:32:58.792751       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326
I1102 05:32:58.792991       1 pv_controller.go:1435] volume "pvc-f58d60c1-abf1-4e6c-9aeb-ff7f96117326" deleted
I1102 05:32:58.832725       1 pv_controller_base.go:505] deletion of claim "azuredisk-5356/pvc-snrnj" was already processed
E1102 05:33:08.149158       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-5cq8l" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
E1102 05:33:10.384871       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-6413/default: secrets "default-token-bkxv7" is forbidden: unable to create new content in namespace azuredisk-6413 because it is being terminated
I1102 05:33:13.260520       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5356
I1102 05:33:14.086862       1 event.go:291] "Event occurred" object="azuredisk-2451/pvc-smqjq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 05:33:14.864330       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1957/default: secrets "default-token-dz7th" is forbidden: unable to create new content in namespace azuredisk-1957 because it is being terminated
I1102 05:33:15.477236       1 namespace_controller.go:185] Namespace has been deleted azuredisk-6413
I1102 05:33:16.688137       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 StorageAccountType:StandardSSD_LRS Size:10
I1102 05:33:16.720884       1 pv_controller.go:1676] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" provisioned for claim "azuredisk-2451/pvc-smqjq"
I1102 05:33:16.721282       1 event.go:291] "Event occurred" object="azuredisk-2451/pvc-smqjq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 using kubernetes.io/azure-disk"
I1102 05:33:16.725387       1 pv_controller.go:879] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" entered phase "Bound"
I1102 05:33:16.725427       1 pv_controller.go:982] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" bound to claim "azuredisk-2451/pvc-smqjq"
... skipping 10 lines ...
I1102 05:33:27.741974       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92") from node "7663k8s000" 
I1102 05:33:27.742233       1 event.go:291] "Event occurred" object="azuredisk-2451/azuredisk-volume-tester-v5n5j" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92\" "
I1102 05:34:03.889940       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-2451/pvc-smqjq"
I1102 05:34:03.906289       1 pv_controller.go:640] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" is released and reclaim policy "Delete" will be executed
I1102 05:34:03.914272       1 pv_controller.go:879] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" entered phase "Released"
I1102 05:34:03.918633       1 pv_controller.go:1340] isVolumeReleased[pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92]: volume is released
I1102 05:34:03.976318       1 pv_controller.go:879] volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" entered phase "Failed"
E1102 05:34:03.976396       1 goroutinemap.go:150] Operation for "delete-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92[bb7f0416-43c7-4796-a13c-a64f82ea8e0b]" failed. No retries permitted until 2021-11-02 05:34:04.476374379 +0000 UTC m=+406.967268318 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:34:03.976871       1 event.go:291] "Event occurred" object="pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:34:08.586716       1 pv_controller.go:1340] isVolumeReleased[pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92]: volume is released
E1102 05:34:08.611848       1 goroutinemap.go:150] Operation for "delete-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92[bb7f0416-43c7-4796-a13c-a64f82ea8e0b]" failed. No retries permitted until 2021-11-02 05:34:09.611822903 +0000 UTC m=+412.102716842 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:34:13.297242       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92") on node "7663k8s000" 
I1102 05:34:13.310946       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92") on node "7663k8s000" 
I1102 05:34:13.382111       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 from node "7663k8s000"
I1102 05:34:13.382161       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92"
I1102 05:34:13.382175       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92)
I1102 05:34:23.593865       1 pv_controller.go:1340] isVolumeReleased[pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92]: volume is released
E1102 05:34:23.593938       1 goroutinemap.go:150] Operation for "delete-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92[bb7f0416-43c7-4796-a13c-a64f82ea8e0b]" failed. No retries permitted until 2021-11-02 05:34:25.593917418 +0000 UTC m=+428.084811457 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) since it's in attaching or detaching state
I1102 05:34:28.819504       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) returned with <nil>
I1102 05:34:28.819544       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92) succeeded
I1102 05:34:28.819554       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92 was detached from node:7663k8s000
I1102 05:34:28.819744       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92") on node "7663k8s000" 
I1102 05:34:38.587769       1 pv_controller.go:1340] isVolumeReleased[pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92]: volume is released
I1102 05:34:43.803465       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-01b52b9d-419f-48d8-b243-8ccf18b4db92
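The sequence above shows the PV controller's delete attempts failing while the disk is still attached ("already attached to node ... could not be deleted", then "attaching or detaching state"), with the goroutinemap retry delay growing from 500ms to 1s to 2s until the attach/detach controller finishes the detach and the managed disk is finally deleted. A minimal, purely illustrative Go sketch of an exponential-backoff retry loop in that spirit; the names, durations, and failure condition below are hypothetical and are not the Kubernetes implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errStillAttached = errors.New("disk already attached to node, could not be deleted")

// deleteDisk simulates deleting a managed disk; it fails until the
// (simulated) detach has completed after a few attempts.
func deleteDisk(attempt int) error {
	if attempt < 3 {
		return errStillAttached
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	for attempt := 0; ; attempt++ {
		if err := deleteDisk(attempt); err != nil {
			fmt.Printf("delete failed: %v; retrying in %v\n", err, backoff)
			time.Sleep(backoff)
			backoff *= 2 // exponential backoff, matching the growing durationBeforeRetry values above
			continue
		}
		fmt.Println("disk deleted")
		return
	}
}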
... skipping 52 lines ...
I1102 05:37:12.855383       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-2vgpf" PVC="azuredisk-9828/pvc-6lm4x"
I1102 05:37:12.855994       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-6lm4x"
I1102 05:37:12.904315       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-9828/pvc-6lm4x"
I1102 05:37:12.921453       1 pv_controller.go:640] volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" is released and reclaim policy "Delete" will be executed
I1102 05:37:12.926422       1 pv_controller.go:879] volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" entered phase "Released"
I1102 05:37:12.941167       1 pv_controller.go:1340] isVolumeReleased[pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b]: volume is released
I1102 05:37:12.983652       1 pv_controller.go:879] volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" entered phase "Failed"
E1102 05:37:12.983707       1 goroutinemap.go:150] Operation for "delete-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b[64b6f458-4509-449e-9e09-7198755e8a9c]" failed. No retries permitted until 2021-11-02 05:37:13.483691951 +0000 UTC m=+595.974585990 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s001), could not be deleted
I1102 05:37:12.983788       1 event.go:291] "Event occurred" object="pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s001), could not be deleted"
I1102 05:37:13.940907       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b") on node "7663k8s001" 
I1102 05:37:13.954674       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b") on node "7663k8s001" 
I1102 05:37:14.073959       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b from node "7663k8s001"
I1102 05:37:14.074003       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b"
I1102 05:37:14.074017       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b)
I1102 05:37:23.601850       1 pv_controller.go:1340] isVolumeReleased[pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b]: volume is released
E1102 05:37:23.601922       1 goroutinemap.go:150] Operation for "delete-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b[64b6f458-4509-449e-9e09-7198755e8a9c]" failed. No retries permitted until 2021-11-02 05:37:24.601899103 +0000 UTC m=+607.092793142 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b) since it's in attaching or detaching state
I1102 05:37:29.591236       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b) returned with <nil>
I1102 05:37:29.591278       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b) succeeded
I1102 05:37:29.591287       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b was detached from node:7663k8s001
I1102 05:37:29.591515       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b") on node "7663k8s001" 
I1102 05:37:38.595585       1 pv_controller.go:1340] isVolumeReleased[pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b]: volume is released
I1102 05:37:43.827347       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d0d48d2e-1dc4-48c8-90f3-b9bbb1023a5b
... skipping 6 lines ...
I1102 05:37:49.479418       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-xtlxb" PVC="azuredisk-9828/pvc-jssk6"
I1102 05:37:49.479446       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-jssk6"
I1102 05:37:49.590448       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-9828/pvc-jssk6"
I1102 05:37:49.600260       1 pv_controller.go:640] volume "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" is released and reclaim policy "Delete" will be executed
I1102 05:37:49.603374       1 pv_controller.go:879] volume "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" entered phase "Released"
I1102 05:37:49.638614       1 pv_controller.go:1340] isVolumeReleased[pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b]: volume is released
I1102 05:37:49.673174       1 pv_controller.go:879] volume "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" entered phase "Failed"
E1102 05:37:49.673712       1 goroutinemap.go:150] Operation for "delete-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b[1d4a013b-df24-45b9-bbe6-9265bfbd75f4]" failed. No retries permitted until 2021-11-02 05:37:50.173693188 +0000 UTC m=+632.664587127 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:37:49.674076       1 event.go:291] "Event occurred" object="pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:37:53.599465       1 pv_controller.go:1340] isVolumeReleased[pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b]: volume is released
E1102 05:37:53.630468       1 goroutinemap.go:150] Operation for "delete-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b[1d4a013b-df24-45b9-bbe6-9265bfbd75f4]" failed. No retries permitted until 2021-11-02 05:37:54.630445636 +0000 UTC m=+637.121339675 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:37:53.750768       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b") on node "7663k8s000" 
I1102 05:37:53.767615       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b") on node "7663k8s000" 
I1102 05:37:53.767677       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b from node "7663k8s000"
I1102 05:37:53.826829       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b"
I1102 05:37:53.826858       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b)
I1102 05:38:04.233106       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-0f3b7945-99a1-4143-90f8-0ddb1fdc931b) returned with <nil>
... skipping 11 lines ...
I1102 05:38:21.608533       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-9828/azuredisk-volume-tester-fpcmq" PVC="azuredisk-9828/pvc-k4stb"
I1102 05:38:21.608561       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-9828/pvc-k4stb"
I1102 05:38:21.632284       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-9828/pvc-k4stb"
I1102 05:38:21.642888       1 pv_controller.go:640] volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" is released and reclaim policy "Delete" will be executed
I1102 05:38:21.646355       1 pv_controller.go:879] volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" entered phase "Released"
I1102 05:38:21.650694       1 pv_controller.go:1340] isVolumeReleased[pvc-a7576988-640b-44f5-be10-0c2dd6754b25]: volume is released
I1102 05:38:21.680186       1 pv_controller.go:879] volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" entered phase "Failed"
E1102 05:38:21.680541       1 goroutinemap.go:150] Operation for "delete-pvc-a7576988-640b-44f5-be10-0c2dd6754b25[25f9a116-b38b-441b-99f6-a06531aee756]" failed. No retries permitted until 2021-11-02 05:38:22.180520308 +0000 UTC m=+664.671414547 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s001), could not be deleted
I1102 05:38:21.680743       1 event.go:291] "Event occurred" object="pvc-a7576988-640b-44f5-be10-0c2dd6754b25" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s001), could not be deleted"
I1102 05:38:23.599211       1 pv_controller.go:1340] isVolumeReleased[pvc-a7576988-640b-44f5-be10-0c2dd6754b25]: volume is released
E1102 05:38:23.624791       1 goroutinemap.go:150] Operation for "delete-pvc-a7576988-640b-44f5-be10-0c2dd6754b25[25f9a116-b38b-441b-99f6-a06531aee756]" failed. No retries permitted until 2021-11-02 05:38:24.624757104 +0000 UTC m=+667.115651043 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s001), could not be deleted
I1102 05:38:24.087255       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25") on node "7663k8s001" 
I1102 05:38:24.095494       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25") on node "7663k8s001" 
I1102 05:38:24.107079       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25 from node "7663k8s001"
I1102 05:38:24.107373       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25"
I1102 05:38:24.107518       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25)
I1102 05:38:38.597938       1 pv_controller.go:1340] isVolumeReleased[pvc-a7576988-640b-44f5-be10-0c2dd6754b25]: volume is released
E1102 05:38:38.598005       1 goroutinemap.go:150] Operation for "delete-pvc-a7576988-640b-44f5-be10-0c2dd6754b25[25f9a116-b38b-441b-99f6-a06531aee756]" failed. No retries permitted until 2021-11-02 05:38:40.597986241 +0000 UTC m=+683.088880280 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) since it's in attaching or detaching state
I1102 05:38:39.593685       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) returned with <nil>
I1102 05:38:39.593884       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25) succeeded
I1102 05:38:39.593931       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25 was detached from node:7663k8s001
I1102 05:38:39.593974       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25") on node "7663k8s001" 
I1102 05:38:53.607249       1 pv_controller.go:1340] isVolumeReleased[pvc-a7576988-640b-44f5-be10-0c2dd6754b25]: volume is released
I1102 05:38:58.787192       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-a7576988-640b-44f5-be10-0c2dd6754b25
I1102 05:38:58.787227       1 pv_controller.go:1435] volume "pvc-a7576988-640b-44f5-be10-0c2dd6754b25" deleted
I1102 05:38:58.821436       1 pv_controller_base.go:505] deletion of claim "azuredisk-9828/pvc-k4stb" was already processed
I1102 05:39:02.363517       1 event.go:291] "Event occurred" object="azuredisk-1563/pvc-9gz5b" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:39:02.486513       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6" need=1 creating=1
I1102 05:39:02.487176       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-tbdzp" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azuredisk-volume-tester-tbdzp-57c6f9fc6 to 1"
I1102 05:39:02.513295       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-tbdzp-57c6f9fc6-4n4s4"
I1102 05:39:02.544257       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1563/azuredisk-volume-tester-tbdzp" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-tbdzp\": the object has been modified; please apply your changes to the latest version and try again"
I1102 05:39:04.921875       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 StorageAccountType:StandardSSD_LRS Size:10
I1102 05:39:04.958919       1 pv_controller.go:1676] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" provisioned for claim "azuredisk-1563/pvc-9gz5b"
I1102 05:39:04.959235       1 event.go:291] "Event occurred" object="azuredisk-1563/pvc-9gz5b" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 using kubernetes.io/azure-disk"
I1102 05:39:04.981333       1 pv_controller.go:879] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" entered phase "Bound"
I1102 05:39:04.981395       1 pv_controller.go:982] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" bound to claim "azuredisk-1563/pvc-9gz5b"
I1102 05:39:05.039684       1 pv_controller.go:823] claim "azuredisk-1563/pvc-9gz5b" entered phase "Bound"
I1102 05:39:05.576204       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") from node "7663k8s001" 
I1102 05:39:05.616908       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" to node "7663k8s001".
I1102 05:39:05.653828       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" lun 0 to node "7663k8s001".
I1102 05:39:05.653892       1 azure_controller_standard.go:93] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - attach disk(kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) with DiskEncryptionSetID()
E1102 05:39:05.680811       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9828/default: secrets "default-token-7fw54" is forbidden: unable to create new content in namespace azuredisk-9828 because it is being terminated
I1102 05:39:10.749931       1 namespace_controller.go:185] Namespace has been deleted azuredisk-9828
I1102 05:39:16.162340       1 azure_controller_standard.go:111] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - attach disk(kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) returned with <nil>
I1102 05:39:16.162381       1 attacher.go:89] Attach operation successful: volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" attached to node "7663k8s001".
I1102 05:39:16.162437       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") from node "7663k8s001" 
I1102 05:39:16.162500       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6-4n4s4" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5\" "
I1102 05:39:52.130041       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6" need=1 creating=1
I1102 05:39:52.156070       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-tbdzp-57c6f9fc6-wlddm"
W1102 05:39:52.228667       1 reconciler.go:376] Multi-Attach error for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") from node "7663k8s000" Volume is already used by pods azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6-4n4s4 on node 7663k8s001
I1102 05:39:52.228939       1 event.go:291] "Event occurred" object="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6-wlddm" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5\" Volume is already used by pod(s) azuredisk-volume-tester-tbdzp-57c6f9fc6-4n4s4"
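The Multi-Attach warning above is raised because the replacement pod was scheduled on node 7663k8s000 while the disk was still attached to 7663k8s001; a single-attach (RWO) azure-disk cannot be attached to the new node until the old attachment is detached, which the following DetachVolume lines then perform. A minimal illustrative Go sketch of such a single-attacher check; the types and names are hypothetical and not the reconciler's actual code:

package main

import "fmt"

// attachState maps a volume name to the node it is currently attached to.
type attachState map[string]string

// canAttach rejects attaching an RWO volume to a second node while it is
// still attached elsewhere, mirroring the Multi-Attach warning above.
func canAttach(state attachState, volume, node string) error {
	if owner, ok := state[volume]; ok && owner != node {
		return fmt.Errorf("Multi-Attach error for volume %q: already used on node %s", volume, owner)
	}
	return nil
}

func main() {
	state := attachState{"pvc-dd421a91": "7663k8s001"}
	if err := canAttach(state, "pvc-dd421a91", "7663k8s000"); err != nil {
		fmt.Println(err) // the new pod on 7663k8s000 must wait for the detach
	}
	delete(state, "pvc-dd421a91")                              // detach completes
	fmt.Println(canAttach(state, "pvc-dd421a91", "7663k8s000")) // <nil>: attach can proceed
}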
I1102 05:40:04.328027       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") on node "7663k8s001" 
I1102 05:40:04.341796       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") on node "7663k8s001" 
I1102 05:40:04.352343       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 from node "7663k8s001"
I1102 05:40:04.352379       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5"
I1102 05:40:04.352393       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5)
I1102 05:40:19.835358       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) returned with <nil>
... skipping 20 lines ...
I1102 05:41:22.700372       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-1563/azuredisk-volume-tester-tbdzp-57c6f9fc6-wlddm" PVC="azuredisk-1563/pvc-9gz5b"
I1102 05:41:22.700402       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-1563/pvc-9gz5b"
I1102 05:41:22.808446       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-1563/pvc-9gz5b"
I1102 05:41:22.838378       1 pv_controller.go:640] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" is released and reclaim policy "Delete" will be executed
I1102 05:41:22.841603       1 pv_controller.go:879] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" entered phase "Released"
I1102 05:41:22.845504       1 pv_controller.go:1340] isVolumeReleased[pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5]: volume is released
I1102 05:41:22.891386       1 pv_controller.go:879] volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" entered phase "Failed"
E1102 05:41:22.891462       1 goroutinemap.go:150] Operation for "delete-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5[e05c6311-d895-48d7-8d30-2551b8050ef8]" failed. No retries permitted until 2021-11-02 05:41:23.391424262 +0000 UTC m=+845.882318301 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:41:22.891789       1 event.go:291] "Event occurred" object="pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:41:23.602333       1 pv_controller.go:1340] isVolumeReleased[pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5]: volume is released
E1102 05:41:23.627216       1 goroutinemap.go:150] Operation for "delete-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5[e05c6311-d895-48d7-8d30-2551b8050ef8]" failed. No retries permitted until 2021-11-02 05:41:24.627197735 +0000 UTC m=+847.118091774 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:41:34.268153       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") on node "7663k8s000" 
I1102 05:41:34.319800       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") on node "7663k8s000" 
I1102 05:41:34.329333       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 from node "7663k8s000"
I1102 05:41:34.329372       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5"
I1102 05:41:34.329385       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5)
I1102 05:41:38.602195       1 pv_controller.go:1340] isVolumeReleased[pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5]: volume is released
E1102 05:41:38.602398       1 goroutinemap.go:150] Operation for "delete-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5[e05c6311-d895-48d7-8d30-2551b8050ef8]" failed. No retries permitted until 2021-11-02 05:41:40.602242961 +0000 UTC m=+863.093137000 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) since it's in attaching or detaching state
I1102 05:41:49.885492       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) returned with <nil>
I1102 05:41:49.885537       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5) succeeded
I1102 05:41:49.885546       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5 was detached from node:7663k8s000
I1102 05:41:49.885570       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5") on node "7663k8s000" 
I1102 05:41:53.617430       1 pv_controller.go:1340] isVolumeReleased[pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5]: volume is released
I1102 05:41:59.101015       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-dd421a91-de08-45cd-a6ff-8aa68774e6d5
... skipping 2 lines ...
I1102 05:42:05.776439       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-7a5d4cb0-774b-4817-b562-780b435652bd StorageAccountType:StandardSSD_LRS Size:10
I1102 05:42:05.809459       1 pv_controller.go:1676] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" provisioned for claim "azuredisk-1577/pvc-8kq58"
I1102 05:42:05.809589       1 event.go:291] "Event occurred" object="azuredisk-1577/pvc-8kq58" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-7a5d4cb0-774b-4817-b562-780b435652bd using kubernetes.io/azure-disk"
I1102 05:42:05.823606       1 pv_controller.go:879] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" entered phase "Bound"
I1102 05:42:05.823642       1 pv_controller.go:982] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" bound to claim "azuredisk-1577/pvc-8kq58"
I1102 05:42:05.866745       1 pv_controller.go:823] claim "azuredisk-1577/pvc-8kq58" entered phase "Bound"
E1102 05:42:06.409740       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1563/default: secrets "default-token-kjcqh" is forbidden: unable to create new content in namespace azuredisk-1563 because it is being terminated
I1102 05:42:07.956145       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-1577/pvc-8kq58"
I1102 05:42:07.966422       1 pv_controller.go:640] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" is released and reclaim policy "Delete" will be executed
I1102 05:42:07.969409       1 pv_controller.go:879] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" entered phase "Released"
I1102 05:42:07.973536       1 pv_controller.go:1340] isVolumeReleased[pvc-7a5d4cb0-774b-4817-b562-780b435652bd]: volume is released
I1102 05:42:11.518256       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1563
I1102 05:42:13.179644       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7a5d4cb0-774b-4817-b562-780b435652bd
I1102 05:42:13.179767       1 pv_controller.go:1435] volume "pvc-7a5d4cb0-774b-4817-b562-780b435652bd" deleted
I1102 05:42:13.209806       1 pv_controller_base.go:505] deletion of claim "azuredisk-1577/pvc-8kq58" was already processed
E1102 05:42:23.719449       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1577/default: secrets "default-token-pv845" is forbidden: unable to create new content in namespace azuredisk-1577 because it is being terminated
E1102 05:42:25.955395       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-953/default: secrets "default-token-2xpdm" is forbidden: unable to create new content in namespace azuredisk-953 because it is being terminated
I1102 05:42:27.599976       1 event.go:291] "Event occurred" object="azuredisk-552/pvc-r7b75" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:42:27.814574       1 event.go:291] "Event occurred" object="azuredisk-552/pvc-fr4c8" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:42:28.035190       1 event.go:291] "Event occurred" object="azuredisk-552/pvc-b8cd7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:42:28.825686       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1577
I1102 05:42:30.585500       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70 StorageAccountType:StandardSSD_LRS Size:10
I1102 05:42:30.611455       1 pv_controller.go:1676] volume "pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70" provisioned for claim "azuredisk-552/pvc-r7b75"
... skipping 41 lines ...
I1102 05:43:02.703594       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d") from node "7663k8s000" 
I1102 05:43:02.703902       1 event.go:291] "Event occurred" object="azuredisk-552/azuredisk-volume-tester-ph8r6" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d\" "
I1102 05:43:36.278517       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-552/pvc-b8cd7"
I1102 05:43:36.288488       1 pv_controller.go:640] volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" is released and reclaim policy "Delete" will be executed
I1102 05:43:36.291295       1 pv_controller.go:879] volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" entered phase "Released"
I1102 05:43:36.306268       1 pv_controller.go:1340] isVolumeReleased[pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3]: volume is released
I1102 05:43:36.349405       1 pv_controller.go:879] volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" entered phase "Failed"
E1102 05:43:36.349594       1 goroutinemap.go:150] Operation for "delete-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3[f7df73dc-b1f1-4070-8ab5-00c86924dafc]" failed. No retries permitted until 2021-11-02 05:43:36.84957472 +0000 UTC m=+979.340468659 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:43:36.349757       1 event.go:291] "Event occurred" object="pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:43:38.606628       1 pv_controller.go:1340] isVolumeReleased[pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3]: volume is released
E1102 05:43:38.632253       1 goroutinemap.go:150] Operation for "delete-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3[f7df73dc-b1f1-4070-8ab5-00c86924dafc]" failed. No retries permitted until 2021-11-02 05:43:39.632230005 +0000 UTC m=+982.123123944 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:43:44.677127       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70") on node "7663k8s000" 
I1102 05:43:44.715162       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3") on node "7663k8s000" 
I1102 05:43:44.716346       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70") on node "7663k8s000" 
I1102 05:43:44.728540       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70 from node "7663k8s000"
I1102 05:43:44.728576       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70"
I1102 05:43:44.728589       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70)
I1102 05:43:44.736882       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3") on node "7663k8s000" 
I1102 05:43:44.736935       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 from node "7663k8s000"
I1102 05:43:44.739381       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d") on node "7663k8s000" 
I1102 05:43:44.771042       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d") on node "7663k8s000" 
I1102 05:43:44.771096       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d from node "7663k8s000"
I1102 05:43:53.613114       1 pv_controller.go:1340] isVolumeReleased[pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3]: volume is released
E1102 05:43:53.668028       1 goroutinemap.go:150] Operation for "delete-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3[f7df73dc-b1f1-4070-8ab5-00c86924dafc]" failed. No retries permitted until 2021-11-02 05:43:55.668006795 +0000 UTC m=+998.158900734 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:44:00.189896       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70) returned with <nil>
I1102 05:44:00.189940       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70) succeeded
I1102 05:44:00.189951       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70 was detached from node:7663k8s000
I1102 05:44:00.189974       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70") on node "7663k8s000" 
I1102 05:44:00.232895       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3"
I1102 05:44:00.233252       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3)
I1102 05:44:08.613629       1 pv_controller.go:1340] isVolumeReleased[pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3]: volume is released
E1102 05:44:08.613852       1 goroutinemap.go:150] Operation for "delete-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3[f7df73dc-b1f1-4070-8ab5-00c86924dafc]" failed. No retries permitted until 2021-11-02 05:44:12.613826184 +0000 UTC m=+1015.104720123 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) since it's in attaching or detaching state
I1102 05:44:15.641218       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) returned with <nil>
I1102 05:44:15.641267       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3) succeeded
I1102 05:44:15.641278       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3 was detached from node:7663k8s000
I1102 05:44:15.641472       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-020e57c9-32df-4fc3-b9c3-ab21d21f7fa3") on node "7663k8s000" 
I1102 05:44:15.676520       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d"
I1102 05:44:15.676550       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-c35a1a71-ddd2-448c-a7f2-5a2c7286f14d)
... skipping 19 lines ...
I1102 05:44:49.137101       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70
I1102 05:44:49.137140       1 pv_controller.go:1435] volume "pvc-76e27a4e-f154-4aa2-8c89-ecf842675c70" deleted
I1102 05:44:49.167210       1 pv_controller_base.go:505] deletion of claim "azuredisk-552/pvc-r7b75" was already processed
I1102 05:45:03.504576       1 event.go:291] "Event occurred" object="azuredisk-9267/pvc-zxqnl" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:45:03.713327       1 event.go:291] "Event occurred" object="azuredisk-9267/pvc-dzcc7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:45:03.928240       1 event.go:291] "Event occurred" object="azuredisk-9267/pvc-rgp9p" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 05:45:04.277465       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3410/default: serviceaccounts "default" not found
I1102 05:45:05.032157       1 namespace_controller.go:185] Namespace has been deleted azuredisk-552
I1102 05:45:06.431565       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259 StorageAccountType:Premium_LRS Size:10
I1102 05:45:06.457621       1 pv_controller.go:1676] volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" provisioned for claim "azuredisk-9267/pvc-dzcc7"
I1102 05:45:06.458246       1 event.go:291] "Event occurred" object="azuredisk-9267/pvc-dzcc7" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-3949e77f-548c-4f56-9254-0d3a2edfc259 using kubernetes.io/azure-disk"
I1102 05:45:06.463155       1 pv_controller.go:879] volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" entered phase "Bound"
I1102 05:45:06.463187       1 pv_controller.go:982] volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" bound to claim "azuredisk-9267/pvc-dzcc7"
... skipping 38 lines ...
I1102 05:45:38.795439       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259") from node "7663k8s000" 
I1102 05:45:38.795648       1 event.go:291] "Event occurred" object="azuredisk-9267/azuredisk-volume-tester-zb6xz" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-3949e77f-548c-4f56-9254-0d3a2edfc259\" "
I1102 05:46:12.186318       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-9267/pvc-rgp9p"
I1102 05:46:12.204679       1 pv_controller.go:640] volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" is released and reclaim policy "Delete" will be executed
I1102 05:46:12.207586       1 pv_controller.go:879] volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" entered phase "Released"
I1102 05:46:12.226497       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
I1102 05:46:12.260557       1 pv_controller.go:879] volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" entered phase "Failed"
E1102 05:46:12.260628       1 goroutinemap.go:150] Operation for "delete-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2[8e1b588d-7808-4c57-8068-d7fa7c953ed4]" failed. No retries permitted until 2021-11-02 05:46:12.760608318 +0000 UTC m=+1135.251502357 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:46:12.260664       1 event.go:291] "Event occurred" object="pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted"
I1102 05:46:23.618877       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
E1102 05:46:23.664760       1 goroutinemap.go:150] Operation for "delete-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2[8e1b588d-7808-4c57-8068-d7fa7c953ed4]" failed. No retries permitted until 2021-11-02 05:46:24.664741882 +0000 UTC m=+1147.155635821 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:46:25.200111       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259") on node "7663k8s000" 
I1102 05:46:25.219224       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259") on node "7663k8s000" 
I1102 05:46:25.221324       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b") on node "7663k8s000" 
I1102 05:46:25.243119       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2") on node "7663k8s000" 
I1102 05:46:25.243549       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259 from node "7663k8s000"
I1102 05:46:25.243729       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259"
I1102 05:46:25.243846       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259)
I1102 05:46:25.244305       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b") on node "7663k8s000" 
I1102 05:46:25.244402       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b from node "7663k8s000"
I1102 05:46:25.249401       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2") on node "7663k8s000" 
I1102 05:46:25.249767       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 from node "7663k8s000"
I1102 05:46:38.614094       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
E1102 05:46:38.652998       1 goroutinemap.go:150] Operation for "delete-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2[8e1b588d-7808-4c57-8068-d7fa7c953ed4]" failed. No retries permitted until 2021-11-02 05:46:40.652947266 +0000 UTC m=+1163.143841305 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:46:40.699064       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259) returned with <nil>
I1102 05:46:40.699107       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259) succeeded
I1102 05:46:40.699116       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259 was detached from node:7663k8s000
I1102 05:46:40.699258       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-3949e77f-548c-4f56-9254-0d3a2edfc259" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-3949e77f-548c-4f56-9254-0d3a2edfc259") on node "7663k8s000" 
I1102 05:46:40.734909       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b"
I1102 05:46:40.734937       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b)
I1102 05:46:53.618680       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
E1102 05:46:53.644001       1 goroutinemap.go:150] Operation for "delete-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2[8e1b588d-7808-4c57-8068-d7fa7c953ed4]" failed. No retries permitted until 2021-11-02 05:46:57.643982124 +0000 UTC m=+1180.134876063 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/virtualMachines/7663k8s000), could not be deleted
I1102 05:46:56.150343       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b) returned with <nil>
I1102 05:46:56.150389       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b) succeeded
I1102 05:46:56.150399       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b was detached from node:7663k8s000
I1102 05:46:56.150423       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b") on node "7663k8s000" 
I1102 05:46:56.193350       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2"
I1102 05:46:56.193381       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2)
I1102 05:47:08.615312       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
E1102 05:47:08.615527       1 goroutinemap.go:150] Operation for "delete-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2[8e1b588d-7808-4c57-8068-d7fa7c953ed4]" failed. No retries permitted until 2021-11-02 05:47:16.615367742 +0000 UTC m=+1199.106261681 (durationBeforeRetry 8s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) since it's in attaching or detaching state
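The repeated "No retries permitted until ..." errors above show the PV controller backing off exponentially (500ms, 1s, 2s, 4s, 8s) while the disk is still attached or detaching. A minimal sketch of that retry pattern, with deleteDisk as a hypothetical stand-in for the real deletion call:

```go
// Sketch of the exponential backoff seen in the delete retries above
// (500ms, 1s, 2s, 4s, 8s, ...); deleteDisk is a hypothetical placeholder.
package main

import (
	"errors"
	"fmt"
	"time"
)

func deleteDisk() error {
	// Placeholder failure: deletion is refused while the disk is still
	// attached or in an attaching/detaching state.
	return errors.New("disk is in attaching or detaching state")
}

func main() {
	backoff := 500 * time.Millisecond
	const maxBackoff = 8 * time.Second

	for attempt := 1; attempt <= 5; attempt++ {
		err := deleteDisk()
		if err == nil {
			fmt.Println("disk deleted")
			return
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted until %s\n",
			attempt, err, time.Now().Add(backoff).Format(time.RFC3339))
		time.Sleep(backoff)
		if backoff < maxBackoff {
			backoff *= 2 // double the wait each time, up to the cap
		}
	}
	fmt.Println("giving up for now; the controller re-queues the volume later")
}
```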
I1102 05:47:11.646215       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) returned with <nil>
I1102 05:47:11.646251       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2) succeeded
I1102 05:47:11.646260       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2 was detached from node:7663k8s000
I1102 05:47:11.646384       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2") on node "7663k8s000" 
I1102 05:47:23.621999       1 pv_controller.go:1340] isVolumeReleased[pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2]: volume is released
I1102 05:47:28.845775       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-7eddb277-ba61-4147-89fa-8aa03ccd8fc2
... skipping 10 lines ...
I1102 05:47:40.231828       1 pv_controller.go:640] volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" is released and reclaim policy "Delete" will be executed
I1102 05:47:40.237116       1 pv_controller.go:879] volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" entered phase "Released"
I1102 05:47:40.249598       1 pv_controller.go:1340] isVolumeReleased[pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b]: volume is released
I1102 05:47:45.468139       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b
I1102 05:47:45.468173       1 pv_controller.go:1435] volume "pvc-37cf9797-3222-41b2-b46a-d597f6cdc29b" deleted
I1102 05:47:45.488296       1 pv_controller_base.go:505] deletion of claim "azuredisk-9267/pvc-zxqnl" was already processed
E1102 05:47:56.280912       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9267/default: secrets "default-token-f6wzc" is forbidden: unable to create new content in namespace azuredisk-9267 because it is being terminated
I1102 05:47:57.741617       1 event.go:291] "Event occurred" object="azuredisk-7175/pvc-rqs6t" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 05:48:00.259670       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-492e741c-c409-4a9b-b8b4-f8ddc6fe7ec7 StorageAccountType:Standard_LRS Size:10
I1102 05:48:00.291033       1 pv_controller.go:1676] volume "pvc-492e741c-c409-4a9b-b8b4-f8ddc6fe7ec7" provisioned for claim "azuredisk-7175/pvc-rqs6t"
I1102 05:48:00.291432       1 event.go:291] "Event occurred" object="azuredisk-7175/pvc-rqs6t" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-492e741c-c409-4a9b-b8b4-f8ddc6fe7ec7 using kubernetes.io/azure-disk"
I1102 05:48:00.295990       1 pv_controller.go:879] volume "pvc-492e741c-c409-4a9b-b8b4-f8ddc6fe7ec7" entered phase "Bound"
I1102 05:48:00.296025       1 pv_controller.go:982] volume "pvc-492e741c-c409-4a9b-b8b4-f8ddc6fe7ec7" bound to claim "azuredisk-7175/pvc-rqs6t"
... skipping 32 lines ...
I1102 05:49:25.573672       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910 StorageAccountType:StandardSSD_LRS Size:10
I1102 05:49:25.601641       1 pv_controller.go:1676] volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" provisioned for claim "azuredisk-1528/pvc-azuredisk-volume-tester-rv2z9-0"
I1102 05:49:25.601718       1 event.go:291] "Event occurred" object="azuredisk-1528/pvc-azuredisk-volume-tester-rv2z9-0" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910 using kubernetes.io/azure-disk"
I1102 05:49:25.605793       1 pv_controller.go:879] volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" entered phase "Bound"
I1102 05:49:25.605827       1 pv_controller.go:982] volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" bound to claim "azuredisk-1528/pvc-azuredisk-volume-tester-rv2z9-0"
I1102 05:49:25.625003       1 pv_controller.go:823] claim "azuredisk-1528/pvc-azuredisk-volume-tester-rv2z9-0" entered phase "Bound"
E1102 05:49:26.107574       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7175/default: secrets "default-token-npdcr" is forbidden: unable to create new content in namespace azuredisk-7175 because it is being terminated
I1102 05:49:26.287688       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910") from node "7663k8s001" 
I1102 05:49:26.357743       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" to node "7663k8s001".
I1102 05:49:26.406272       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" lun 0 to node "7663k8s001".
I1102 05:49:26.406309       1 azure_controller_standard.go:93] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - attach disk(kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910) with DiskEncryptionSetID()
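Above, GetDiskLun finds no existing LUN for the disk, so the controller attaches it at the first free LUN (0 here). A minimal sketch of picking the lowest free LUN, using illustrative data structures rather than the cloud provider's actual VM model:

```go
// Sketch of choosing the lowest free LUN for a data-disk attach.
package main

import "fmt"

// nextFreeLUN returns the smallest LUN in [0, maxLUNs) not already in use.
func nextFreeLUN(used map[int]bool, maxLUNs int) (int, error) {
	for lun := 0; lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN among %d slots", maxLUNs)
}

func main() {
	used := map[int]bool{} // VM currently has no data disks attached
	lun, err := nextFreeLUN(used, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println("attach disk at lun", lun) // prints: attach disk at lun 0
}
```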
I1102 05:49:31.158552       1 namespace_controller.go:185] Namespace has been deleted azuredisk-7175
I1102 05:49:36.891046       1 azure_controller_standard.go:111] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - attach disk(kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910) returned with <nil>
I1102 05:49:36.891085       1 attacher.go:89] Attach operation successful: volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" attached to node "7663k8s001".
I1102 05:49:36.891146       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910") from node "7663k8s001" 
I1102 05:49:36.891426       1 event.go:291] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-rv2z9-0" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910\" "
I1102 05:50:13.207589       1 event.go:291] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-rv2z9" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod azuredisk-volume-tester-rv2z9-0 in StatefulSet azuredisk-volume-tester-rv2z9 successful"
I1102 05:50:13.287745       1 event.go:291] "Event occurred" object="azuredisk-1528/azuredisk-volume-tester-rv2z9-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910\" Volume is already exclusively attached to one node and can't be attached to another"
W1102 05:50:13.287811       1 reconciler.go:335] Multi-Attach error for volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910") from node "7663k8s000" Volume is already exclusively attached to node 7663k8s001 and can't be attached to another
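The warning above is the attach/detach reconciler refusing to attach an exclusively-attached (ReadWriteOnce) disk to a second node before it has been detached from the first. A minimal sketch of that guard, with illustrative types rather than the controller's actual structures:

```go
// Sketch of the multi-attach guard implied by the warning above.
package main

import "fmt"

type attachment struct {
	volume string
	node   string
}

// checkMultiAttach rejects attaching a non-shareable (ReadWriteOnce) volume
// to a new node while it is still attached elsewhere.
func checkMultiAttach(attached []attachment, volume, node string, shareable bool) error {
	for _, a := range attached {
		if a.volume == volume && a.node != node && !shareable {
			return fmt.Errorf("volume %q is already exclusively attached to node %q and can't be attached to %q",
				volume, a.node, node)
		}
	}
	return nil
}

func main() {
	state := []attachment{{volume: "pvc-d7f88d68", node: "7663k8s001"}}
	if err := checkMultiAttach(state, "pvc-d7f88d68", "7663k8s000", false); err != nil {
		fmt.Println("FailedAttachVolume:", err)
	}
}
```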
I1102 05:50:25.497750       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910") on node "7663k8s001" 
I1102 05:50:25.505872       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910") on node "7663k8s001" 
I1102 05:50:25.506116       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910 from node "7663k8s001"
I1102 05:50:25.506280       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910"
I1102 05:50:25.506417       1 azure_controller_standard.go:166] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910)
I1102 05:50:40.955841       1 azure_controller_standard.go:184] azureDisk - update(kubetest-cta3zcyi): vm(7663k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-cta3zcyi/providers/Microsoft.Compute/disks/kubetest-cta3zcyi-dynamic-pvc-d7f88d68-f9de-43f5-8fe6-eedc50778910) returned with <nil>
... skipping 17 lines ...
I1102 05:51:47.004729       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-1528/pvc-azuredisk-volume-tester-rv2z9-0"
2021/11/02 05:51:49 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 9 of 53 Specs in 1239.484 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 44 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
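The summary and deprecation notice above come from a Ginkgo v1 suite (the import path the notice asks users to migrate away from). A minimal sketch of the shape of such a suite, with hypothetical package, suite, and spec names:

```go
// Sketch of a Ginkgo v1 test suite of the kind summarized above.
package example_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestExample(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Example Suite")
}

var _ = Describe("Dynamic Provisioning", func() {
	It("should create a volume on demand", func() {
		// A real spec would create a PVC and a pod; this only shows the shape.
		Expect(true).To(BeTrue())
	})
})
```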
... skipping 117 lines ...
     

   Flags

     -d
	   Detailed option, when used with list, dumps the json of the object

Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 46 lines ...
     

   Flags

     -d
	   Detailed option, when used with list, dumps the json of the object

Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 8 lines ...
Warning: Permanently added '7663k8s001' (ECDSA) to the list of known hosts.
Collecting logs for vm 7663k8s002
Testing connection to host 7663k8s002.
Warning: Permanently added 'kubetest-cta3zcyi.uksouth.cloudapp.azure.com,20.108.117.84' (ECDSA) to the list of known hosts.

Authorized uses only. All activity may be monitored and reported.
channel 0: open failed: connect failed: Temporary failure in name resolution

stdio forwarding failed

ssh_exchange_identification: Connection closed by remote host
2021/11/02 05:53:51 process.go:155: Step 'bash -c /root/tmp555149321/win-ci-logs-collector.sh kubetest-cta3zcyi.uksouth.cloudapp.azure.com /root/tmp555149321 /etc/ssh-key-secret/ssh-private' finished in 1m26.585550981s
2021/11/02 05:53:51 aksengine.go:1141: Deleting resource group: kubetest-cta3zcyi.