PR (nilo19): Automated cherry pick of #105839: fix: remove VMSS and VMSS instances from SLB backend pool
Result: ABORTED
Tests: 0 failed / 9 succeeded
Started: 2021-11-02 05:56
Elapsed: 55m15s
Revision: f04c7de7f1ff6219c988a44559baeb1de9459c91
Refs: 106072

No Test Failures!


9 passed tests · 44 skipped tests (details collapsed)

Error lines from build-log.txt

... skipping 201 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 06:26:08.282: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vwmvx" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Nov  2 06:26:08.387: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 104.721814ms
Nov  2 06:26:10.495: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212642312s
Nov  2 06:26:12.600: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318157902s
Nov  2 06:26:14.705: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.422692988s
Nov  2 06:26:16.809: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.526927984s
Nov  2 06:26:18.915: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.632829107s
... skipping 23 lines ...
Nov  2 06:27:09.481: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.198833101s
Nov  2 06:27:11.587: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.305049375s
Nov  2 06:27:13.700: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.41804095s
Nov  2 06:27:15.807: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.525071247s
Nov  2 06:27:17.914: INFO: Pod "azuredisk-volume-tester-vwmvx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.632143426s
STEP: Saw pod success
Nov  2 06:27:17.914: INFO: Pod "azuredisk-volume-tester-vwmvx" satisfied condition "Succeeded or Failed"
Nov  2 06:27:17.914: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-vwmvx"
Nov  2 06:27:18.090: INFO: Pod azuredisk-volume-tester-vwmvx has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-vwmvx in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 06:27:18.424: INFO: deleting PVC "azuredisk-8081"/"pvc-q9tr5"
Nov  2 06:27:18.424: INFO: Deleting PersistentVolumeClaim "pvc-q9tr5"
STEP: waiting for claim's PV "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" to be deleted
Nov  2 06:27:18.530: INFO: Waiting up to 10m0s for PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 to get deleted
Nov  2 06:27:18.638: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (107.725287ms)
Nov  2 06:27:23.744: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (5.213746439s)
Nov  2 06:27:28.854: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (10.32349432s)
Nov  2 06:27:33.961: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (15.430764882s)
Nov  2 06:27:39.069: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (20.538315477s)
Nov  2 06:27:44.176: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (25.645865504s)
Nov  2 06:27:49.286: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (30.755370699s)
Nov  2 06:27:54.395: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (35.864822527s)
Nov  2 06:27:59.501: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (40.970478111s)
Nov  2 06:28:04.611: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 found and phase=Failed (46.081047403s)
Nov  2 06:28:09.721: INFO: PersistentVolume pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 was removed
Nov  2 06:28:09.721: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Nov  2 06:28:09.826: INFO: Claim "azuredisk-8081" in namespace "pvc-q9tr5" doesn't exist in the system
Nov  2 06:28:09.826: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-8djtp
Nov  2 06:28:09.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 134 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Nov  2 06:28:21.650: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-27pgj" in namespace "azuredisk-9947" to be "Error status code"
Nov  2 06:28:21.766: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 116.697951ms
Nov  2 06:28:23.872: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222182853s
Nov  2 06:28:25.978: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328100589s
Nov  2 06:28:28.097: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.44743185s
Nov  2 06:28:30.204: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 8.553993018s
Nov  2 06:28:32.309: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 10.659687765s
... skipping 19 lines ...
Nov  2 06:29:14.459: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Pending", Reason="", readiness=false. Elapsed: 52.80979922s
Nov  2 06:29:16.566: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Running", Reason="", readiness=true. Elapsed: 54.916204549s
Nov  2 06:29:18.673: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Running", Reason="", readiness=true. Elapsed: 57.022918941s
Nov  2 06:29:20.778: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Running", Reason="", readiness=true. Elapsed: 59.127950433s
Nov  2 06:29:22.884: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Running", Reason="", readiness=true. Elapsed: 1m1.23403737s
Nov  2 06:29:24.990: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.340073479s
Nov  2 06:29:27.096: INFO: Pod "azuredisk-volume-tester-27pgj": Phase="Failed", Reason="", readiness=false. Elapsed: 1m5.446150732s
STEP: Saw pod failure
Nov  2 06:29:27.096: INFO: Pod "azuredisk-volume-tester-27pgj" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Nov  2 06:29:27.288: INFO: deleting Pod "azuredisk-9947"/"azuredisk-volume-tester-27pgj"
Nov  2 06:29:27.424: INFO: Pod azuredisk-volume-tester-27pgj has the following logs: out-file : Access to the path 'C:\mnt\test-1\data' is denied.
At line:1 char:1
+ echo $null >> C:\mnt\test-1\data
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 7 lines ...
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 06:29:27.755: INFO: deleting PVC "azuredisk-9947"/"pvc-xdpwk"
Nov  2 06:29:27.755: INFO: Deleting PersistentVolumeClaim "pvc-xdpwk"
STEP: waiting for claim's PV "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" to be deleted
Nov  2 06:29:27.862: INFO: Waiting up to 10m0s for PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 to get deleted
Nov  2 06:29:27.971: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (108.074491ms)
Nov  2 06:29:33.080: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (5.217105949s)
Nov  2 06:29:38.187: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (10.324744382s)
Nov  2 06:29:43.297: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (15.434772682s)
Nov  2 06:29:48.403: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (20.540938016s)
Nov  2 06:29:53.509: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (25.646849367s)
Nov  2 06:29:58.616: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (30.753050655s)
Nov  2 06:30:03.727: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 found and phase=Failed (35.864643913s)
Nov  2 06:30:08.833: INFO: PersistentVolume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 was removed
Nov  2 06:30:08.834: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9947 to be removed
Nov  2 06:30:08.938: INFO: Claim "azuredisk-9947" in namespace "pvc-xdpwk" doesn't exist in the system
Nov  2 06:30:08.938: INFO: deleting StorageClass azuredisk-9947-kubernetes.io-azure-disk-dynamic-sc-sd22t
Nov  2 06:30:09.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9947" for this suite.
... skipping 47 lines ...
STEP: checking the PV
Nov  2 06:32:17.437: INFO: deleting PVC "azuredisk-5541"/"pvc-752k2"
Nov  2 06:32:17.437: INFO: Deleting PersistentVolumeClaim "pvc-752k2"
STEP: waiting for claim's PV "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" to be deleted
Nov  2 06:32:17.546: INFO: Waiting up to 10m0s for PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 to get deleted
Nov  2 06:32:17.651: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Bound (105.409929ms)
Nov  2 06:32:22.759: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (5.213549586s)
Nov  2 06:32:27.867: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (10.320807966s)
Nov  2 06:32:32.974: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (15.42854551s)
Nov  2 06:32:38.080: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (20.534146195s)
Nov  2 06:32:43.189: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (25.643413051s)
Nov  2 06:32:48.297: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (30.750943312s)
Nov  2 06:32:53.402: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (35.855867591s)
Nov  2 06:32:58.508: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (40.961823557s)
Nov  2 06:33:03.613: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 found and phase=Failed (46.066641674s)
Nov  2 06:33:08.718: INFO: PersistentVolume pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 was removed
Nov  2 06:33:08.718: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Nov  2 06:33:08.822: INFO: Claim "azuredisk-5541" in namespace "pvc-752k2" doesn't exist in the system
Nov  2 06:33:08.822: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-xdd4m
Nov  2 06:33:08.928: INFO: deleting Pod "azuredisk-5541"/"azuredisk-volume-tester-rchpv"
Nov  2 06:33:09.047: INFO: Pod azuredisk-volume-tester-rchpv has the following logs: 
... skipping 2 lines ...
STEP: checking the PV
Nov  2 06:33:09.365: INFO: deleting PVC "azuredisk-5541"/"pvc-zqs2c"
Nov  2 06:33:09.365: INFO: Deleting PersistentVolumeClaim "pvc-zqs2c"
STEP: waiting for claim's PV "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" to be deleted
Nov  2 06:33:09.472: INFO: Waiting up to 10m0s for PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 to get deleted
Nov  2 06:33:09.576: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Bound (104.682259ms)
Nov  2 06:33:14.683: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (5.211819359s)
Nov  2 06:33:19.792: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (10.320252861s)
Nov  2 06:33:24.897: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (15.425475326s)
Nov  2 06:33:30.002: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (20.530524613s)
Nov  2 06:33:35.109: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (25.637506135s)
Nov  2 06:33:40.216: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (30.744534816s)
Nov  2 06:33:45.322: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (35.850079768s)
Nov  2 06:33:50.430: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 found and phase=Failed (40.957989272s)
Nov  2 06:33:55.538: INFO: PersistentVolume pvc-5193a8d1-38ad-48ab-b02f-68a981313814 was removed
Nov  2 06:33:55.538: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Nov  2 06:33:55.642: INFO: Claim "azuredisk-5541" in namespace "pvc-zqs2c" doesn't exist in the system
Nov  2 06:33:55.642: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-rvccv
Nov  2 06:33:55.758: INFO: deleting Pod "azuredisk-5541"/"azuredisk-volume-tester-b5t4d"
Nov  2 06:33:55.910: INFO: Pod azuredisk-volume-tester-b5t4d has the following logs: 
... skipping 2 lines ...
STEP: checking the PV
Nov  2 06:33:56.228: INFO: deleting PVC "azuredisk-5541"/"pvc-mk64l"
Nov  2 06:33:56.228: INFO: Deleting PersistentVolumeClaim "pvc-mk64l"
STEP: waiting for claim's PV "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" to be deleted
Nov  2 06:33:56.334: INFO: Waiting up to 10m0s for PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 to get deleted
Nov  2 06:33:56.439: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Bound (104.609576ms)
Nov  2 06:34:01.545: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (5.210615453s)
Nov  2 06:34:06.651: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (10.316263226s)
Nov  2 06:34:11.758: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (15.423689426s)
Nov  2 06:34:16.864: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (20.529653187s)
Nov  2 06:34:21.971: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (25.636912972s)
Nov  2 06:34:27.079: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (30.744473s)
Nov  2 06:34:32.185: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (35.850516278s)
Nov  2 06:34:37.291: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 found and phase=Failed (40.956908102s)
Nov  2 06:34:42.397: INFO: PersistentVolume pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 was removed
Nov  2 06:34:42.397: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Nov  2 06:34:42.507: INFO: Claim "azuredisk-5541" in namespace "pvc-mk64l" doesn't exist in the system
Nov  2 06:34:42.507: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-wt4r8
Nov  2 06:34:42.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5541" for this suite.
... skipping 69 lines ...
STEP: checking the PV
Nov  2 06:37:06.130: INFO: deleting PVC "azuredisk-5356"/"pvc-l8s2c"
Nov  2 06:37:06.130: INFO: Deleting PersistentVolumeClaim "pvc-l8s2c"
STEP: waiting for claim's PV "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" to be deleted
Nov  2 06:37:06.238: INFO: Waiting up to 10m0s for PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 to get deleted
Nov  2 06:37:06.342: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Bound (104.494302ms)
Nov  2 06:37:11.448: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (5.21044553s)
Nov  2 06:37:16.555: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (10.317727313s)
Nov  2 06:37:21.663: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (15.425813632s)
Nov  2 06:37:26.770: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (20.532769441s)
Nov  2 06:37:31.879: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (25.641117005s)
Nov  2 06:37:36.985: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 found and phase=Failed (30.74775547s)
Nov  2 06:37:42.095: INFO: PersistentVolume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 was removed
Nov  2 06:37:42.095: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Nov  2 06:37:42.199: INFO: Claim "azuredisk-5356" in namespace "pvc-l8s2c" doesn't exist in the system
Nov  2 06:37:42.199: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-grqjk
Nov  2 06:37:42.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 06:38:09.682: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-ggrjx" in namespace "azuredisk-8510" to be "Succeeded or Failed"
Nov  2 06:38:09.788: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 106.113078ms
Nov  2 06:38:11.896: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213349359s
Nov  2 06:38:14.001: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318785621s
Nov  2 06:38:16.107: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425118079s
Nov  2 06:38:18.213: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 8.53069284s
Nov  2 06:38:20.318: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 10.636082494s
... skipping 23 lines ...
Nov  2 06:39:10.885: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.202736554s
Nov  2 06:39:12.993: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.310751409s
Nov  2 06:39:15.098: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.415848431s
Nov  2 06:39:17.204: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.522006446s
Nov  2 06:39:19.311: INFO: Pod "azuredisk-volume-tester-ggrjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m9.629095064s
STEP: Saw pod success
Nov  2 06:39:19.311: INFO: Pod "azuredisk-volume-tester-ggrjx" satisfied condition "Succeeded or Failed"
Nov  2 06:39:19.311: INFO: deleting Pod "azuredisk-8510"/"azuredisk-volume-tester-ggrjx"
Nov  2 06:39:19.485: INFO: Pod azuredisk-volume-tester-ggrjx has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-ggrjx in namespace azuredisk-8510
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 06:39:19.812: INFO: deleting PVC "azuredisk-8510"/"pvc-d8lnq"
Nov  2 06:39:19.812: INFO: Deleting PersistentVolumeClaim "pvc-d8lnq"
STEP: waiting for claim's PV "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" to be deleted
Nov  2 06:39:19.918: INFO: Waiting up to 10m0s for PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda to get deleted
Nov  2 06:39:20.030: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (112.268806ms)
Nov  2 06:39:25.140: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (5.221809183s)
Nov  2 06:39:30.252: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (10.333734318s)
Nov  2 06:39:35.358: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (15.43985442s)
Nov  2 06:39:40.463: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (20.54543694s)
Nov  2 06:39:45.569: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (25.651055154s)
Nov  2 06:39:50.674: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (30.756402672s)
Nov  2 06:39:55.780: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (35.861535169s)
Nov  2 06:40:00.886: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (40.967603572s)
Nov  2 06:40:05.992: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (46.074095524s)
Nov  2 06:40:11.099: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (51.18057756s)
Nov  2 06:40:16.207: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (56.288945743s)
Nov  2 06:40:21.314: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda found and phase=Failed (1m1.395739384s)
Nov  2 06:40:26.420: INFO: PersistentVolume pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda was removed
Nov  2 06:40:26.420: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8510 to be removed
Nov  2 06:40:26.525: INFO: Claim "azuredisk-8510" in namespace "pvc-d8lnq" doesn't exist in the system
Nov  2 06:40:26.525: INFO: deleting StorageClass azuredisk-8510-kubernetes.io-azure-disk-dynamic-sc-rzdgx
STEP: validating provisioned PV
STEP: checking the PV
... skipping 137 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Nov  2 06:40:58.165: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2nb8n" in namespace "azuredisk-953" to be "Succeeded or Failed"
Nov  2 06:40:58.273: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 108.585297ms
Nov  2 06:41:00.378: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213638396s
Nov  2 06:41:02.483: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.318687813s
Nov  2 06:41:04.593: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428569829s
Nov  2 06:41:06.700: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 8.535113401s
Nov  2 06:41:08.805: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640751296s
... skipping 35 lines ...
Nov  2 06:42:24.660: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.495766563s
Nov  2 06:42:26.769: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.604360732s
Nov  2 06:42:28.879: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Running", Reason="", readiness=true. Elapsed: 1m30.714046967s
Nov  2 06:42:30.984: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Running", Reason="", readiness=true. Elapsed: 1m32.819761708s
Nov  2 06:42:33.091: INFO: Pod "azuredisk-volume-tester-2nb8n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m34.926504646s
STEP: Saw pod success
Nov  2 06:42:33.091: INFO: Pod "azuredisk-volume-tester-2nb8n" satisfied condition "Succeeded or Failed"
Nov  2 06:42:33.091: INFO: deleting Pod "azuredisk-953"/"azuredisk-volume-tester-2nb8n"
Nov  2 06:42:33.294: INFO: Pod azuredisk-volume-tester-2nb8n has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-2nb8n in namespace azuredisk-953
STEP: validating provisioned PV
STEP: checking the PV
Nov  2 06:42:33.615: INFO: deleting PVC "azuredisk-953"/"pvc-hbq4q"
Nov  2 06:42:33.615: INFO: Deleting PersistentVolumeClaim "pvc-hbq4q"
STEP: waiting for claim's PV "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" to be deleted
Nov  2 06:42:33.722: INFO: Waiting up to 10m0s for PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 to get deleted
Nov  2 06:42:33.825: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (103.759751ms)
Nov  2 06:42:38.934: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (5.21213783s)
Nov  2 06:42:44.061: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (10.33960035s)
Nov  2 06:42:49.170: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (15.448848611s)
Nov  2 06:42:54.280: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (20.558280295s)
Nov  2 06:42:59.391: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (25.669266084s)
Nov  2 06:43:04.499: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (30.777500042s)
Nov  2 06:43:09.605: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (35.883077844s)
Nov  2 06:43:14.711: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (40.989543399s)
Nov  2 06:43:19.821: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 found and phase=Failed (46.099071144s)
Nov  2 06:43:24.928: INFO: PersistentVolume pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 was removed
Nov  2 06:43:24.928: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-953 to be removed
Nov  2 06:43:25.032: INFO: Claim "azuredisk-953" in namespace "pvc-hbq4q" doesn't exist in the system
Nov  2 06:43:25.032: INFO: deleting StorageClass azuredisk-953-kubernetes.io-azure-disk-dynamic-sc-zfh8t
STEP: validating provisioned PV
STEP: checking the PV
... skipping 332 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160
STEP: Creating a kubernetes client
Nov  2 06:47:19.601: INFO: >>> kubeConfig: /root/tmp730255417/kubeconfig/kubeconfig.northeurope.json
STEP: Building a namespace api object, basename azuredisk
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in azuredisk-4885
STEP: Waiting for a default service account to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [1.051 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:160

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 278 lines ...
I1102 06:21:12.533603       1 azure_securitygroupclient.go:63] Azure SecurityGroupsClient (read ops) using rate limit config: QPS=100, bucket=1000
I1102 06:21:12.533609       1 azure_securitygroupclient.go:66] Azure SecurityGroupsClient (write ops) using rate limit config: QPS=100, bucket=1000
I1102 06:21:12.533620       1 azure_publicipclient.go:63] Azure PublicIPAddressesClient (read ops) using rate limit config: QPS=100, bucket=1000
I1102 06:21:12.533641       1 azure_publicipclient.go:66] Azure PublicIPAddressesClient (write ops) using rate limit config: QPS=100, bucket=1000
I1102 06:21:12.533683       1 azure.go:741] Setting up informers for Azure cloud provider
I1102 06:21:12.536763       1 shared_informer.go:240] Waiting for caches to sync for tokens
W1102 06:21:12.590634       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I1102 06:21:12.590690       1 controllermanager.go:562] Starting "horizontalpodautoscaling"
I1102 06:21:12.610240       1 controllermanager.go:577] Started "horizontalpodautoscaling"
I1102 06:21:12.610272       1 controllermanager.go:562] Starting "cronjob"
I1102 06:21:12.610589       1 horizontal.go:169] Starting HPA controller
I1102 06:21:12.610869       1 shared_informer.go:240] Waiting for caches to sync for HPA
I1102 06:21:12.621185       1 controllermanager.go:577] Started "cronjob"
... skipping 115 lines ...
I1102 06:21:14.299371       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I1102 06:21:14.299534       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I1102 06:21:14.299867       1 controllermanager.go:577] Started "attachdetach"
I1102 06:21:14.300012       1 controllermanager.go:562] Starting "pvc-protection"
I1102 06:21:14.300169       1 attach_detach_controller.go:328] Starting attach detach controller
I1102 06:21:14.300335       1 shared_informer.go:240] Waiting for caches to sync for attach detach
W1102 06:21:14.300574       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k8s-master-23691125-0" does not exist
I1102 06:21:14.444777       1 controllermanager.go:577] Started "pvc-protection"
I1102 06:21:14.444807       1 controllermanager.go:562] Starting "ephemeral-volume"
I1102 06:21:14.444812       1 pvc_protection_controller.go:110] "Starting PVC protection controller"
I1102 06:21:14.444850       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I1102 06:21:14.597039       1 controllermanager.go:577] Started "ephemeral-volume"
I1102 06:21:14.597259       1 controllermanager.go:562] Starting "endpoint"
... skipping 143 lines ...
I1102 06:21:17.620531       1 shared_informer.go:247] Caches are synced for garbage collector 
I1102 06:21:17.620887       1 garbagecollector.go:254] synced garbage collector
I1102 06:21:22.075002       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:27.075595       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:29.593210       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-787d7f6757" need=1 creating=1
I1102 06:21:29.599077       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d7f6757 to 1"
I1102 06:21:29.668234       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I1102 06:21:29.680830       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-autoscaler-87b67c5fd" need=1 creating=1
I1102 06:21:29.684340       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-autoscaler-87b67c5fd to 1"
I1102 06:21:29.744540       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns-autoscaler" err="Operation cannot be fulfilled on deployments.apps \"coredns-autoscaler\": the object has been modified; please apply your changes to the latest version and try again"
I1102 06:21:29.795725       1 event.go:291] "Event occurred" object="kube-system/coredns-787d7f6757" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d7f6757-w5fjl"
I1102 06:21:29.796439       1 event.go:291] "Event occurred" object="kube-system/coredns-autoscaler-87b67c5fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-autoscaler-87b67c5fd-jd4px"
I1102 06:21:32.075817       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:33.676371       1 event.go:291] "Event occurred" object="kube-system/azure-ip-masq-agent" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azure-ip-masq-agent-rwvsd"
I1102 06:21:33.820023       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vrxvn"
I1102 06:21:37.079900       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:39.839500       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-67b6bdd56c to 1"
I1102 06:21:39.842193       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/metrics-server-67b6bdd56c" need=1 creating=1
I1102 06:21:39.859499       1 event.go:291] "Event occurred" object="kube-system/metrics-server-67b6bdd56c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-67b6bdd56c-lpqt2"
I1102 06:21:39.884294       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I1102 06:21:42.080200       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:47.081507       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
E1102 06:21:47.189376       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1102 06:21:47.663982       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1102 06:21:52.082365       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:21:57.083142       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:02.084212       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:07.084607       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:12.085683       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:17.086579       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
E1102 06:22:17.212950       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1102 06:22:17.704373       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1102 06:22:22.087818       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:27.088870       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:32.089913       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:37.090867       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:42.092202       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:47.092520       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
E1102 06:22:47.237933       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
W1102 06:22:47.743704       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I1102 06:22:52.093129       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:22:57.093514       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:02.094005       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:07.094381       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:12.094923       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:17.095788       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
... skipping 3 lines ...
I1102 06:23:37.099842       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:42.100766       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:47.101501       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:52.101920       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:23:57.102433       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:24:02.103207       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
W1102 06:24:04.543023       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2369k8s000" does not exist
I1102 06:24:04.556067       1 ttl_controller.go:276] "Changed ttl annotation" node="2369k8s000" new_ttl="0s"
I1102 06:24:07.103854       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:24:07.103892       1 node_lifecycle_controller.go:770] Controller observed a new Node: "2369k8s000"
I1102 06:24:07.103905       1 controller_utils.go:172] Recording Registered Node 2369k8s000 in Controller event message for node 2369k8s000
I1102 06:24:07.104086       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::1
W1102 06:24:07.104274       1 node_lifecycle_controller.go:1013] Missing timestamp for Node 2369k8s000. Assuming now as a timestamp.
... skipping 4 lines ...
I1102 06:24:14.624867       1 controller.go:262] Node changes detected, triggering a full node sync on all loadbalancer services
I1102 06:24:14.624896       1 controller.go:730] Syncing backends for all LB services.
I1102 06:24:14.624905       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I1102 06:24:17.114975       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:24:17.115176       1 node_lifecycle_controller.go:893] Node 2369k8s000 is healthy again, removing all taints
I1102 06:24:17.115222       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
W1102 06:24:21.090407       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="2369k8s001" does not exist
I1102 06:24:21.119221       1 ttl_controller.go:276] "Changed ttl annotation" node="2369k8s001" new_ttl="0s"
I1102 06:24:22.116145       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: northeurope::0
I1102 06:24:22.116190       1 node_lifecycle_controller.go:770] Controller observed a new Node: "2369k8s001"
I1102 06:24:22.116205       1 controller_utils.go:172] Recording Registered Node 2369k8s001 in Controller event message for node 2369k8s001
W1102 06:24:22.116612       1 node_lifecycle_controller.go:1013] Missing timestamp for Node 2369k8s001. Assuming now as a timestamp.
I1102 06:24:22.116654       1 node_lifecycle_controller.go:869] Node 2369k8s001 is NotReady as of 2021-11-02 06:24:22.116642995 +0000 UTC m=+191.651584978. Adding it to the Taint queue.
... skipping 20 lines ...
I1102 06:26:21.855432       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7") from node "2369k8s001" 
I1102 06:26:21.855564       1 event.go:291] "Event occurred" object="azuredisk-8081/azuredisk-volume-tester-vwmvx" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7\" "
I1102 06:27:18.513684       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-8081/pvc-q9tr5"
I1102 06:27:18.536495       1 pv_controller.go:640] volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" is released and reclaim policy "Delete" will be executed
I1102 06:27:18.542973       1 pv_controller.go:879] volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" entered phase "Released"
I1102 06:27:18.545503       1 pv_controller.go:1340] isVolumeReleased[pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7]: volume is released
I1102 06:27:18.574888       1 pv_controller.go:879] volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" entered phase "Failed"
E1102 06:27:18.574987       1 goroutinemap.go:150] Operation for "delete-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7[31666f58-7315-4023-83b4-e191e585ed8b]" failed. No retries permitted until 2021-11-02 06:27:19.074960154 +0000 UTC m=+368.609902237 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:27:18.575446       1 event.go:291] "Event occurred" object="pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:27:31.743599       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7") on node "2369k8s001" 
I1102 06:27:31.750170       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7") on node "2369k8s001" 
I1102 06:27:31.763321       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 from node "2369k8s001"
I1102 06:27:31.763384       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7"
I1102 06:27:31.763548       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7)
I1102 06:27:32.083985       1 pv_controller.go:1340] isVolumeReleased[pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7]: volume is released
E1102 06:27:32.084170       1 goroutinemap.go:150] Operation for "delete-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7[31666f58-7315-4023-83b4-e191e585ed8b]" failed. No retries permitted until 2021-11-02 06:27:33.084142888 +0000 UTC m=+382.619084971 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) since it's in attaching or detaching state
I1102 06:27:47.086181       1 pv_controller.go:1340] isVolumeReleased[pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7]: volume is released
E1102 06:27:47.086252       1 goroutinemap.go:150] Operation for "delete-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7[31666f58-7315-4023-83b4-e191e585ed8b]" failed. No retries permitted until 2021-11-02 06:27:49.086231125 +0000 UTC m=+398.621173108 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) since it's in attaching or detaching state
I1102 06:27:47.223240       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) returned with <nil>
I1102 06:27:47.223302       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7) succeeded
I1102 06:27:47.223314       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7 was detached from node:2369k8s001
I1102 06:27:47.223339       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7") on node "2369k8s001" 
I1102 06:28:02.089149       1 pv_controller.go:1340] isVolumeReleased[pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7]: volume is released
I1102 06:28:07.323126       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7
I1102 06:28:07.323172       1 pv_controller.go:1435] volume "pvc-0e451e64-19a3-4dba-b09d-879c2b6827b7" deleted
I1102 06:28:07.361033       1 pv_controller_base.go:505] deletion of claim "azuredisk-8081/pvc-q9tr5" was already processed
E1102 06:28:19.886028       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-694/default: secrets "default-token-4gjg9" is forbidden: unable to create new content in namespace azuredisk-694 because it is being terminated
I1102 06:28:20.462192       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8081
I1102 06:28:21.483453       1 event.go:291] "Event occurred" object="azuredisk-9947/pvc-xdpwk" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 06:28:22.133664       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3274/default: secrets "default-token-pgbc7" is forbidden: unable to create new content in namespace azuredisk-3274 because it is being terminated
I1102 06:28:22.707082       1 namespace_controller.go:185] Namespace has been deleted azuredisk-1318
I1102 06:28:23.911609       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 StorageAccountType:StandardSSD_LRS Size:10
I1102 06:28:23.956259       1 pv_controller.go:1676] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" provisioned for claim "azuredisk-9947/pvc-xdpwk"
I1102 06:28:23.956415       1 event.go:291] "Event occurred" object="azuredisk-9947/pvc-xdpwk" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 using kubernetes.io/azure-disk"
I1102 06:28:23.959138       1 pv_controller.go:879] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" entered phase "Bound"
I1102 06:28:23.959172       1 pv_controller.go:982] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" bound to claim "azuredisk-9947/pvc-xdpwk"
... skipping 10 lines ...
I1102 06:28:35.270549       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625") from node "2369k8s000" 
I1102 06:28:35.270835       1 event.go:291] "Event occurred" object="azuredisk-9947/azuredisk-volume-tester-27pgj" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-c6a21f67-cafd-459c-9035-92f0b4b87625\" "
I1102 06:29:27.816434       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-9947/pvc-xdpwk"
I1102 06:29:27.831142       1 pv_controller.go:640] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" is released and reclaim policy "Delete" will be executed
I1102 06:29:27.835557       1 pv_controller.go:879] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" entered phase "Released"
I1102 06:29:27.859877       1 pv_controller.go:1340] isVolumeReleased[pvc-c6a21f67-cafd-459c-9035-92f0b4b87625]: volume is released
I1102 06:29:27.893661       1 pv_controller.go:879] volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" entered phase "Failed"
E1102 06:29:27.894031       1 goroutinemap.go:150] Operation for "delete-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625[aa2a8309-14fe-4d5c-bda8-30de3907654c]" failed. No retries permitted until 2021-11-02 06:29:28.394008823 +0000 UTC m=+497.928950906 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted
I1102 06:29:27.894469       1 event.go:291] "Event occurred" object="pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted"
I1102 06:29:32.094363       1 pv_controller.go:1340] isVolumeReleased[pvc-c6a21f67-cafd-459c-9035-92f0b4b87625]: volume is released
E1102 06:29:32.139741       1 goroutinemap.go:150] Operation for "delete-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625[aa2a8309-14fe-4d5c-bda8-30de3907654c]" failed. No retries permitted until 2021-11-02 06:29:33.139717281 +0000 UTC m=+502.674659364 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted
I1102 06:29:35.536485       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625") on node "2369k8s000" 
I1102 06:29:35.548097       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625") on node "2369k8s000" 
I1102 06:29:35.562617       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 from node "2369k8s000"
I1102 06:29:35.562710       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625"
I1102 06:29:35.562732       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625)
I1102 06:29:47.118516       1 pv_controller.go:1340] isVolumeReleased[pvc-c6a21f67-cafd-459c-9035-92f0b4b87625]: volume is released
E1102 06:29:47.118590       1 goroutinemap.go:150] Operation for "delete-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625[aa2a8309-14fe-4d5c-bda8-30de3907654c]" failed. No retries permitted until 2021-11-02 06:29:49.118568365 +0000 UTC m=+518.653510348 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) since it's in attaching or detaching state
I1102 06:29:51.019659       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) returned with <nil>
I1102 06:29:51.019700       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625) succeeded
I1102 06:29:51.019711       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625 was detached from node:2369k8s000
I1102 06:29:51.019732       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-c6a21f67-cafd-459c-9035-92f0b4b87625" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625") on node "2369k8s000" 
I1102 06:30:02.095039       1 pv_controller.go:1340] isVolumeReleased[pvc-c6a21f67-cafd-459c-9035-92f0b4b87625]: volume is released
I1102 06:30:07.305539       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-c6a21f67-cafd-459c-9035-92f0b4b87625
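As with the first volume, deletion is refused in two distinct states before it succeeds: while the disk is still attached ("already attached to node") and again mid-detach ("in attaching or detaching state"); only after the detach completes does the managed disk delete go through. A sketch of that guard, with `diskState` and the error strings loosely modeled on the messages above (illustrative, not the azure cloud-provider's actual types):

```go
package main

import (
	"errors"
	"fmt"
)

// diskState is a hypothetical simplification of the states the log
// distinguishes when deciding whether a managed disk may be deleted.
type diskState int

const (
	attached diskState = iota
	detaching
	unattached
)

// deleteDisk refuses deletion unless the disk is fully detached,
// mirroring the two error messages seen in the log.
func deleteDisk(state diskState) error {
	switch state {
	case attached:
		return errors.New("disk already attached to node, could not be deleted")
	case detaching:
		return errors.New("failed to delete disk since it's in attaching or detaching state")
	default:
		return nil // detach complete: safe to delete the backing disk
	}
}

func main() {
	// The log's ordering for each test volume: two refusals, then success.
	for _, s := range []diskState{attached, detaching, unattached} {
		fmt.Println(deleteDisk(s))
	}
}
```

This ordering protects the data path: a disk is never destroyed while a VM might still hold it open, so the PV controller's delete simply keeps failing (and backing off) until the attach/detach controller finishes.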
... skipping 52 lines ...
I1102 06:32:21.283145       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-c974m" PVC="azuredisk-5541/pvc-752k2"
I1102 06:32:21.283201       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-752k2"
I1102 06:32:21.306204       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-5541/pvc-752k2"
I1102 06:32:21.316829       1 pv_controller.go:640] volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" is released and reclaim policy "Delete" will be executed
I1102 06:32:21.320974       1 pv_controller.go:879] volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" entered phase "Released"
I1102 06:32:21.336234       1 pv_controller.go:1340] isVolumeReleased[pvc-088cac6f-657e-4727-86f7-a94b4eb467b6]: volume is released
I1102 06:32:21.363740       1 pv_controller.go:879] volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" entered phase "Failed"
E1102 06:32:21.364968       1 goroutinemap.go:150] Operation for "delete-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6[e0f1de47-892e-4e9b-a2d5-fc2f6df2f390]" failed. No retries permitted until 2021-11-02 06:32:21.864938229 +0000 UTC m=+671.399880312 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:32:21.365221       1 event.go:291] "Event occurred" object="pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:32:32.117525       1 pv_controller.go:1340] isVolumeReleased[pvc-088cac6f-657e-4727-86f7-a94b4eb467b6]: volume is released
E1102 06:32:32.142865       1 goroutinemap.go:150] Operation for "delete-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6[e0f1de47-892e-4e9b-a2d5-fc2f6df2f390]" failed. No retries permitted until 2021-11-02 06:32:33.142842287 +0000 UTC m=+682.677784270 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:32:32.506844       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6") on node "2369k8s001" 
I1102 06:32:32.513974       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6") on node "2369k8s001" 
I1102 06:32:32.528773       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 from node "2369k8s001"
I1102 06:32:32.528815       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6"
I1102 06:32:32.528833       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6)
I1102 06:32:47.112820       1 pv_controller.go:1340] isVolumeReleased[pvc-088cac6f-657e-4727-86f7-a94b4eb467b6]: volume is released
E1102 06:32:47.112964       1 goroutinemap.go:150] Operation for "delete-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6[e0f1de47-892e-4e9b-a2d5-fc2f6df2f390]" failed. No retries permitted until 2021-11-02 06:32:49.112938429 +0000 UTC m=+698.647880512 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) since it's in attaching or detaching state
I1102 06:32:47.946083       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) returned with <nil>
I1102 06:32:47.946135       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6) succeeded
I1102 06:32:47.946146       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6 was detached from node:2369k8s001
I1102 06:32:47.946167       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-088cac6f-657e-4727-86f7-a94b4eb467b6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6") on node "2369k8s001" 
I1102 06:33:02.104207       1 pv_controller.go:1340] isVolumeReleased[pvc-088cac6f-657e-4727-86f7-a94b4eb467b6]: volume is released
I1102 06:33:07.350097       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-088cac6f-657e-4727-86f7-a94b4eb467b6
... skipping 6 lines ...
I1102 06:33:12.133889       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-rchpv" PVC="azuredisk-5541/pvc-zqs2c"
I1102 06:33:12.134077       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-zqs2c"
I1102 06:33:12.150534       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-5541/pvc-zqs2c"
I1102 06:33:12.161592       1 pv_controller.go:640] volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" is released and reclaim policy "Delete" will be executed
I1102 06:33:12.165579       1 pv_controller.go:879] volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" entered phase "Released"
I1102 06:33:12.177323       1 pv_controller.go:1340] isVolumeReleased[pvc-5193a8d1-38ad-48ab-b02f-68a981313814]: volume is released
I1102 06:33:12.219959       1 pv_controller.go:879] volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" entered phase "Failed"
E1102 06:33:12.220043       1 goroutinemap.go:150] Operation for "delete-pvc-5193a8d1-38ad-48ab-b02f-68a981313814[b8fbf8bc-a668-45b4-b5e2-55d79570b216]" failed. No retries permitted until 2021-11-02 06:33:12.72000608 +0000 UTC m=+722.254948163 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:33:12.220099       1 event.go:291] "Event occurred" object="pvc-5193a8d1-38ad-48ab-b02f-68a981313814" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:33:17.111851       1 pv_controller.go:1340] isVolumeReleased[pvc-5193a8d1-38ad-48ab-b02f-68a981313814]: volume is released
E1102 06:33:17.134791       1 goroutinemap.go:150] Operation for "delete-pvc-5193a8d1-38ad-48ab-b02f-68a981313814[b8fbf8bc-a668-45b4-b5e2-55d79570b216]" failed. No retries permitted until 2021-11-02 06:33:18.13477255 +0000 UTC m=+727.669714633 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:33:22.603139       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814") on node "2369k8s001" 
I1102 06:33:22.610418       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814") on node "2369k8s001" 
I1102 06:33:22.610490       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814 from node "2369k8s001"
I1102 06:33:22.610523       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814"
I1102 06:33:22.610540       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814)
I1102 06:33:32.104922       1 pv_controller.go:1340] isVolumeReleased[pvc-5193a8d1-38ad-48ab-b02f-68a981313814]: volume is released
E1102 06:33:32.105048       1 goroutinemap.go:150] Operation for "delete-pvc-5193a8d1-38ad-48ab-b02f-68a981313814[b8fbf8bc-a668-45b4-b5e2-55d79570b216]" failed. No retries permitted until 2021-11-02 06:33:34.105025344 +0000 UTC m=+743.639967427 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) since it's in attaching or detaching state
I1102 06:33:38.089971       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) returned with <nil>
I1102 06:33:38.090017       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814) succeeded
I1102 06:33:38.090028       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814 was detached from node:2369k8s001
I1102 06:33:38.090051       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5193a8d1-38ad-48ab-b02f-68a981313814" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814") on node "2369k8s001" 
I1102 06:33:47.117066       1 pv_controller.go:1340] isVolumeReleased[pvc-5193a8d1-38ad-48ab-b02f-68a981313814]: volume is released
I1102 06:33:52.305479       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5193a8d1-38ad-48ab-b02f-68a981313814
... skipping 6 lines ...
I1102 06:33:58.483515       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-5541/azuredisk-volume-tester-b5t4d" PVC="azuredisk-5541/pvc-mk64l"
I1102 06:33:58.483561       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-5541/pvc-mk64l"
I1102 06:33:58.501833       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-5541/pvc-mk64l"
I1102 06:33:58.512246       1 pv_controller.go:640] volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" is released and reclaim policy "Delete" will be executed
I1102 06:33:58.516091       1 pv_controller.go:879] volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" entered phase "Released"
I1102 06:33:58.518814       1 pv_controller.go:1340] isVolumeReleased[pvc-b31b3717-51ca-4ef0-87af-d39f6f767323]: volume is released
I1102 06:33:58.546528       1 pv_controller.go:879] volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" entered phase "Failed"
I1102 06:33:58.547369       1 event.go:291] "Event occurred" object="pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted"
E1102 06:33:58.547591       1 goroutinemap.go:150] Operation for "delete-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323[11a1509c-0a5e-41ce-ac85-e74c690b45d4]" failed. No retries permitted until 2021-11-02 06:33:59.046611823 +0000 UTC m=+768.581553806 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted
I1102 06:34:02.106644       1 pv_controller.go:1340] isVolumeReleased[pvc-b31b3717-51ca-4ef0-87af-d39f6f767323]: volume is released
E1102 06:34:02.132139       1 goroutinemap.go:150] Operation for "delete-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323[11a1509c-0a5e-41ce-ac85-e74c690b45d4]" failed. No retries permitted until 2021-11-02 06:34:03.132121664 +0000 UTC m=+772.667063647 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted
I1102 06:34:06.156537       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323") on node "2369k8s000" 
I1102 06:34:06.165152       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323") on node "2369k8s000" 
I1102 06:34:06.180157       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 from node "2369k8s000"
I1102 06:34:06.283737       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323"
I1102 06:34:06.283770       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323)
I1102 06:34:17.124861       1 pv_controller.go:1340] isVolumeReleased[pvc-b31b3717-51ca-4ef0-87af-d39f6f767323]: volume is released
E1102 06:34:17.125330       1 goroutinemap.go:150] Operation for "delete-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323[11a1509c-0a5e-41ce-ac85-e74c690b45d4]" failed. No retries permitted until 2021-11-02 06:34:19.125303745 +0000 UTC m=+788.660245828 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) since it's in attaching or detaching state
I1102 06:34:21.755727       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) returned with <nil>
I1102 06:34:21.755788       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323) succeeded
I1102 06:34:21.755800       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323 was detached from node:2369k8s000
I1102 06:34:21.755826       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323") on node "2369k8s000" 
I1102 06:34:32.108451       1 pv_controller.go:1340] isVolumeReleased[pvc-b31b3717-51ca-4ef0-87af-d39f6f767323]: volume is released
I1102 06:34:37.355646       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b31b3717-51ca-4ef0-87af-d39f6f767323
I1102 06:34:37.355709       1 pv_controller.go:1435] volume "pvc-b31b3717-51ca-4ef0-87af-d39f6f767323" deleted
I1102 06:34:37.384859       1 pv_controller_base.go:505] deletion of claim "azuredisk-5541/pvc-mk64l" was already processed
I1102 06:34:45.464150       1 event.go:291] "Event occurred" object="azuredisk-5356/pvc-l8s2c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:34:45.584188       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f" need=1 creating=1
I1102 06:34:45.585688       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-jkhkj" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azuredisk-volume-tester-jkhkj-6dd7f6fd5f to 1"
I1102 06:34:45.603187       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-jkhkj" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-jkhkj\": the object has been modified; please apply your changes to the latest version and try again"
I1102 06:34:45.619739       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-jkhkj-6dd7f6fd5f-5vptg"
I1102 06:34:47.977471       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 StorageAccountType:StandardSSD_LRS Size:10
I1102 06:34:48.005776       1 pv_controller.go:1676] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" provisioned for claim "azuredisk-5356/pvc-l8s2c"
I1102 06:34:48.006014       1 event.go:291] "Event occurred" object="azuredisk-5356/pvc-l8s2c" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 using kubernetes.io/azure-disk"
I1102 06:34:48.038036       1 pv_controller.go:879] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" entered phase "Bound"
I1102 06:34:48.038075       1 pv_controller.go:982] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" bound to claim "azuredisk-5356/pvc-l8s2c"
I1102 06:34:48.063332       1 pv_controller.go:823] claim "azuredisk-5356/pvc-l8s2c" entered phase "Bound"
E1102 06:34:48.088577       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5541/default: secrets "default-token-w84f7" is forbidden: unable to create new content in namespace azuredisk-5541 because it is being terminated
I1102 06:34:48.674150       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") from node "2369k8s000" 
I1102 06:34:48.724768       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" to node "2369k8s000".
I1102 06:34:48.747414       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" lun 0 to node "2369k8s000".
I1102 06:34:48.747542       1 azure_controller_standard.go:93] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - attach disk(kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) with DiskEncryptionSetID()
I1102 06:34:53.435570       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5541
I1102 06:34:59.231496       1 azure_controller_standard.go:111] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - attach disk(kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) returned with <nil>
I1102 06:34:59.231555       1 attacher.go:89] Attach operation successful: volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" attached to node "2369k8s000".
I1102 06:34:59.231881       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") from node "2369k8s000" 
I1102 06:34:59.232322       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f-5vptg" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-5ec072c8-caab-44d4-ab61-0929d3732c89\" "
I1102 06:35:36.624767       1 replica_set.go:563] "Too few replicas" replicaSet="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f" need=1 creating=1
I1102 06:35:36.658507       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-jkhkj-6dd7f6fd5f-c9jd2"
W1102 06:35:36.738442       1 reconciler.go:376] Multi-Attach error for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") from node "2369k8s001" Volume is already used by pods azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f-5vptg on node 2369k8s000
I1102 06:35:36.738532       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f-c9jd2" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-5ec072c8-caab-44d4-ab61-0929d3732c89\" Volume is already used by pod(s) azuredisk-volume-tester-jkhkj-6dd7f6fd5f-5vptg"
I1102 06:35:46.425606       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") on node "2369k8s000" 
I1102 06:35:46.433557       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") on node "2369k8s000" 
I1102 06:35:46.449095       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 from node "2369k8s000"
I1102 06:35:46.449152       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89"
I1102 06:35:46.449186       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89)
I1102 06:36:01.897108       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) returned with <nil>
... skipping 20 lines ...
I1102 06:37:09.246702       1 pvc_protection_controller.go:303] "Pod uses PVC" pod="azuredisk-5356/azuredisk-volume-tester-jkhkj-6dd7f6fd5f-c9jd2" PVC="azuredisk-5356/pvc-l8s2c"
I1102 06:37:09.246943       1 pvc_protection_controller.go:181] "Keeping PVC because it is being used" PVC="azuredisk-5356/pvc-l8s2c"
I1102 06:37:09.269842       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-5356/pvc-l8s2c"
I1102 06:37:09.279704       1 pv_controller.go:640] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" is released and reclaim policy "Delete" will be executed
I1102 06:37:09.282873       1 pv_controller.go:879] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" entered phase "Released"
I1102 06:37:09.293047       1 pv_controller.go:1340] isVolumeReleased[pvc-5ec072c8-caab-44d4-ab61-0929d3732c89]: volume is released
I1102 06:37:09.333091       1 pv_controller.go:879] volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" entered phase "Failed"
E1102 06:37:09.333183       1 goroutinemap.go:150] Operation for "delete-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89[975356e5-bfef-45bc-a451-e4fa1bd44245]" failed. No retries permitted until 2021-11-02 06:37:09.833158653 +0000 UTC m=+959.368100636 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:37:09.333250       1 event.go:291] "Event occurred" object="pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:37:13.073372       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") on node "2369k8s001" 
I1102 06:37:13.082431       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") on node "2369k8s001" 
I1102 06:37:13.198364       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 from node "2369k8s001"
I1102 06:37:13.198414       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89"
I1102 06:37:13.198429       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89)
I1102 06:37:17.120241       1 pv_controller.go:1340] isVolumeReleased[pvc-5ec072c8-caab-44d4-ab61-0929d3732c89]: volume is released
E1102 06:37:17.120320       1 goroutinemap.go:150] Operation for "delete-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89[975356e5-bfef-45bc-a451-e4fa1bd44245]" failed. No retries permitted until 2021-11-02 06:37:18.120296531 +0000 UTC m=+967.655238614 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) since it's in attaching or detaching state
I1102 06:37:28.754390       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) returned with <nil>
I1102 06:37:28.754622       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89) succeeded
I1102 06:37:28.754788       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89 was detached from node:2369k8s001
I1102 06:37:28.754954       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-5ec072c8-caab-44d4-ab61-0929d3732c89" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89") on node "2369k8s001" 
I1102 06:37:32.115374       1 pv_controller.go:1340] isVolumeReleased[pvc-5ec072c8-caab-44d4-ab61-0929d3732c89]: volume is released
I1102 06:37:37.325043       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-5ec072c8-caab-44d4-ab61-0929d3732c89
... skipping 2 lines ...
I1102 06:37:47.153510       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-6e80435c-2014-40e9-aabf-021a3494f4d4 StorageAccountType:StandardSSD_LRS Size:10
I1102 06:37:47.184834       1 pv_controller.go:1676] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" provisioned for claim "azuredisk-3090/pvc-zdqnj"
I1102 06:37:47.185183       1 event.go:291] "Event occurred" object="azuredisk-3090/pvc-zdqnj" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-6e80435c-2014-40e9-aabf-021a3494f4d4 using kubernetes.io/azure-disk"
I1102 06:37:47.196537       1 pv_controller.go:879] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" entered phase "Bound"
I1102 06:37:47.196854       1 pv_controller.go:982] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" bound to claim "azuredisk-3090/pvc-zdqnj"
I1102 06:37:47.214067       1 pv_controller.go:823] claim "azuredisk-3090/pvc-zdqnj" entered phase "Bound"
E1102 06:37:47.830021       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5356/default: secrets "default-token-lrf48" is forbidden: unable to create new content in namespace azuredisk-5356 because it is being terminated
I1102 06:37:49.429151       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-3090/pvc-zdqnj"
I1102 06:37:49.440930       1 pv_controller.go:640] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" is released and reclaim policy "Delete" will be executed
I1102 06:37:49.449298       1 pv_controller.go:879] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" entered phase "Released"
I1102 06:37:49.452628       1 pv_controller.go:1340] isVolumeReleased[pvc-6e80435c-2014-40e9-aabf-021a3494f4d4]: volume is released
I1102 06:37:52.891307       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5356
I1102 06:37:54.615782       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-6e80435c-2014-40e9-aabf-021a3494f4d4
I1102 06:37:54.615819       1 pv_controller.go:1435] volume "pvc-6e80435c-2014-40e9-aabf-021a3494f4d4" deleted
I1102 06:37:54.634667       1 pv_controller_base.go:505] deletion of claim "azuredisk-3090/pvc-zdqnj" was already processed
E1102 06:38:05.247827       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3090/default: secrets "default-token-fs8mr" is forbidden: unable to create new content in namespace azuredisk-3090 because it is being terminated
I1102 06:38:09.083825       1 event.go:291] "Event occurred" object="azuredisk-8510/pvc-bvrfg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:38:09.295333       1 event.go:291] "Event occurred" object="azuredisk-8510/pvc-cwj2s" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:38:09.511060       1 event.go:291] "Event occurred" object="azuredisk-8510/pvc-d8lnq" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 06:38:09.755681       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3721/default: secrets "default-token-lkh4t" is forbidden: unable to create new content in namespace azuredisk-3721 because it is being terminated
I1102 06:38:10.360856       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3090
I1102 06:38:12.190551       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3 StorageAccountType:StandardSSD_LRS Size:10
I1102 06:38:12.221651       1 pv_controller.go:1676] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" provisioned for claim "azuredisk-8510/pvc-bvrfg"
I1102 06:38:12.221911       1 event.go:291] "Event occurred" object="azuredisk-8510/pvc-bvrfg" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3 using kubernetes.io/azure-disk"
I1102 06:38:12.225663       1 pv_controller.go:879] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" entered phase "Bound"
I1102 06:38:12.225695       1 pv_controller.go:982] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" bound to claim "azuredisk-8510/pvc-bvrfg"
... skipping 38 lines ...
I1102 06:38:44.572158       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda") from node "2369k8s001" 
I1102 06:38:44.572418       1 event.go:291] "Event occurred" object="azuredisk-8510/azuredisk-volume-tester-ggrjx" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda\" "
I1102 06:39:19.881600       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-8510/pvc-d8lnq"
I1102 06:39:19.895123       1 pv_controller.go:640] volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" is released and reclaim policy "Delete" will be executed
I1102 06:39:19.900510       1 pv_controller.go:879] volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" entered phase "Released"
I1102 06:39:19.902985       1 pv_controller.go:1340] isVolumeReleased[pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda]: volume is released
I1102 06:39:19.938554       1 pv_controller.go:879] volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" entered phase "Failed"
E1102 06:39:19.938641       1 goroutinemap.go:150] Operation for "delete-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda[b69b8328-fd92-46f3-a5aa-3b6c0c3d2697]" failed. No retries permitted until 2021-11-02 06:39:20.438618709 +0000 UTC m=+1089.973560792 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:39:19.939079       1 event.go:291] "Event occurred" object="pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:39:32.119708       1 pv_controller.go:1340] isVolumeReleased[pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda]: volume is released
E1102 06:39:32.154505       1 goroutinemap.go:150] Operation for "delete-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda[b69b8328-fd92-46f3-a5aa-3b6c0c3d2697]" failed. No retries permitted until 2021-11-02 06:39:33.154483258 +0000 UTC m=+1102.689425341 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:39:33.559582       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3") on node "2369k8s001" 
I1102 06:39:33.571813       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3") on node "2369k8s001" 
I1102 06:39:33.572542       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14") on node "2369k8s001" 
I1102 06:39:33.584179       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14") on node "2369k8s001" 
I1102 06:39:33.584810       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda") on node "2369k8s001" 
I1102 06:39:33.594606       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda") on node "2369k8s001" 
I1102 06:39:33.605637       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3 from node "2369k8s001"
I1102 06:39:33.605820       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14 from node "2369k8s001"
I1102 06:39:33.606523       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda from node "2369k8s001"
I1102 06:39:33.606314       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3"
I1102 06:39:33.606547       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3)
I1102 06:39:47.125026       1 pv_controller.go:1340] isVolumeReleased[pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda]: volume is released
E1102 06:39:47.157369       1 goroutinemap.go:150] Operation for "delete-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda[b69b8328-fd92-46f3-a5aa-3b6c0c3d2697]" failed. No retries permitted until 2021-11-02 06:39:49.157353981 +0000 UTC m=+1118.692295964 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:39:49.121174       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3) returned with <nil>
I1102 06:39:49.121275       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3) succeeded
I1102 06:39:49.121287       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3 was detached from node:2369k8s001
I1102 06:39:49.121640       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3") on node "2369k8s001" 
I1102 06:39:49.166762       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14"
I1102 06:39:49.166792       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14)
I1102 06:40:02.122205       1 pv_controller.go:1340] isVolumeReleased[pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda]: volume is released
E1102 06:40:02.146499       1 goroutinemap.go:150] Operation for "delete-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda[b69b8328-fd92-46f3-a5aa-3b6c0c3d2697]" failed. No retries permitted until 2021-11-02 06:40:06.146480994 +0000 UTC m=+1135.681422977 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
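Note: the `durationBeforeRetry` values in the goroutinemap errors above double on each failed delete attempt (500ms, 1s, 2s, 4s). A minimal sketch of that kind of exponential backoff is below; the initial delay, factor, and cap are illustrative values, not the controller's actual configuration.

```python
def next_backoff(current, initial=0.5, factor=2.0, cap=300.0):
    """Return the next retry delay in seconds.

    Doubles the delay after each failure, up to a cap, mirroring the
    500ms -> 1s -> 2s -> 4s progression visible in the log. All parameter
    values here are illustrative assumptions.
    """
    if current <= 0:
        return initial
    return min(current * factor, cap)

# Simulate four consecutive failures.
delays = []
d = 0.0
for _ in range(4):
    d = next_backoff(d)
    delays.append(d)
print(delays)  # [0.5, 1.0, 2.0, 4.0]
```

Capping the delay keeps a persistently failing operation from backing off indefinitely while still shedding load from the cloud API.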
I1102 06:40:04.570103       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14) returned with <nil>
I1102 06:40:04.570166       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14) succeeded
I1102 06:40:04.570177       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14 was detached from node:2369k8s001
I1102 06:40:04.570202       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-86fea015-662f-4fc6-8a2e-1dfbfd354e14") on node "2369k8s001" 
I1102 06:40:04.602366       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda"
I1102 06:40:04.602408       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-bcaedaa2-c5bc-4fd7-919b-caf01f80bbda)
... skipping 16 lines ...
I1102 06:40:37.774178       1 pv_controller.go:640] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" is released and reclaim policy "Delete" will be executed
I1102 06:40:37.778641       1 pv_controller.go:879] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" entered phase "Released"
I1102 06:40:37.792475       1 pv_controller.go:1340] isVolumeReleased[pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3]: volume is released
I1102 06:40:42.951568       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3
I1102 06:40:42.951959       1 pv_controller.go:1435] volume "pvc-b034d1b8-5281-4dcb-bd5e-74dfa11bd4b3" deleted
I1102 06:40:42.974675       1 pv_controller_base.go:505] deletion of claim "azuredisk-8510/pvc-bvrfg" was already processed
E1102 06:40:53.671911       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8510/default: secrets "default-token-dkscz" is forbidden: unable to create new content in namespace azuredisk-8510 because it is being terminated
I1102 06:40:57.581252       1 event.go:291] "Event occurred" object="azuredisk-953/pvc-wrhxb" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:40:57.794075       1 event.go:291] "Event occurred" object="azuredisk-953/pvc-rjwj6" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:40:58.005282       1 event.go:291] "Event occurred" object="azuredisk-953/pvc-hbq4q" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
I1102 06:40:58.957957       1 namespace_controller.go:185] Namespace has been deleted azuredisk-8510
I1102 06:41:00.532400       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 StorageAccountType:StandardSSD_LRS Size:10
I1102 06:41:00.540057       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e StorageAccountType:Premium_LRS Size:10
... skipping 41 lines ...
I1102 06:41:42.836454       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e") from node "2369k8s001" 
I1102 06:41:42.836666       1 event.go:291] "Event occurred" object="azuredisk-953/azuredisk-volume-tester-2nb8n" kind="Pod" apiVersion="v1" type="Normal" reason="SuccessfulAttachVolume" message="AttachVolume.Attach succeeded for volume \"pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e\" "
I1102 06:42:33.674886       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-953/pvc-hbq4q"
I1102 06:42:33.702674       1 pv_controller.go:640] volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" is released and reclaim policy "Delete" will be executed
I1102 06:42:33.706711       1 pv_controller.go:879] volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" entered phase "Released"
I1102 06:42:33.718144       1 pv_controller.go:1340] isVolumeReleased[pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5]: volume is released
I1102 06:42:33.746079       1 pv_controller.go:879] volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" entered phase "Failed"
E1102 06:42:33.746138       1 goroutinemap.go:150] Operation for "delete-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5[13670b4c-42cd-4327-bb05-f6fee9ae9c70]" failed. No retries permitted until 2021-11-02 06:42:34.246118107 +0000 UTC m=+1283.781060190 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:42:33.746410       1 event.go:291] "Event occurred" object="pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted"
I1102 06:42:43.991607       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5") on node "2369k8s001" 
I1102 06:42:44.015556       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-279006a1-1666-4b48-afff-3246d6313237" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237") on node "2369k8s001" 
I1102 06:42:44.022823       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-279006a1-1666-4b48-afff-3246d6313237" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237") on node "2369k8s001" 
I1102 06:42:44.023146       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5") on node "2369k8s001" 
I1102 06:42:44.036888       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e") on node "2369k8s001" 
I1102 06:42:44.041913       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237 from node "2369k8s001"
I1102 06:42:44.041956       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237"
I1102 06:42:44.041978       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237)
I1102 06:42:44.042246       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 from node "2369k8s001"
I1102 06:42:44.045530       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e") on node "2369k8s001" 
I1102 06:42:44.045577       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e from node "2369k8s001"
I1102 06:42:47.134842       1 pv_controller.go:1340] isVolumeReleased[pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5]: volume is released
E1102 06:42:47.179360       1 goroutinemap.go:150] Operation for "delete-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5[13670b4c-42cd-4327-bb05-f6fee9ae9c70]" failed. No retries permitted until 2021-11-02 06:42:48.179337251 +0000 UTC m=+1297.714279334 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s001), could not be deleted
I1102 06:42:59.477154       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237) returned with <nil>
I1102 06:42:59.477246       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237) succeeded
I1102 06:42:59.477256       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237 was detached from node:2369k8s001
I1102 06:42:59.477282       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-279006a1-1666-4b48-afff-3246d6313237" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237") on node "2369k8s001" 
I1102 06:42:59.536237       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5"
I1102 06:42:59.536281       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5)
I1102 06:43:02.131255       1 pv_controller.go:1340] isVolumeReleased[pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5]: volume is released
E1102 06:43:02.131481       1 goroutinemap.go:150] Operation for "delete-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5[13670b4c-42cd-4327-bb05-f6fee9ae9c70]" failed. No retries permitted until 2021-11-02 06:43:04.131303691 +0000 UTC m=+1313.666245774 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) since it's in attaching or detaching state
I1102 06:43:14.917577       1 azure_controller_standard.go:184] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) returned with <nil>
I1102 06:43:14.917647       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5) succeeded
I1102 06:43:14.917659       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5 was detached from node:2369k8s001
I1102 06:43:14.917687       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-e1f24840-931f-4407-b6c7-1ec6c52710a5") on node "2369k8s001" 
I1102 06:43:14.948407       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e"
I1102 06:43:14.948438       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-21083fc5-07ce-4a7e-9fd1-c60c4de9cf0e)
... skipping 17 lines ...
I1102 06:43:36.306060       1 pv_controller.go:879] volume "pvc-279006a1-1666-4b48-afff-3246d6313237" entered phase "Released"
I1102 06:43:36.311774       1 pv_controller.go:1340] isVolumeReleased[pvc-279006a1-1666-4b48-afff-3246d6313237]: volume is released
I1102 06:43:41.476333       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-279006a1-1666-4b48-afff-3246d6313237
I1102 06:43:41.476366       1 pv_controller.go:1435] volume "pvc-279006a1-1666-4b48-afff-3246d6313237" deleted
I1102 06:43:41.491864       1 pv_controller_base.go:505] deletion of claim "azuredisk-953/pvc-wrhxb" was already processed
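Note: the sequence above shows the ordering constraint the PV controller enforces: a managed disk cannot be deleted while it is still attached to a VM, so deletes fail (and are retried with backoff) until the detach completes. A hypothetical sketch of that check, not the actual controller code:

```python
class DiskState:
    """Illustrative stand-in for a managed disk's attachment state."""

    def __init__(self, attached_to=None):
        # Name of the node the disk is attached to, or None if detached.
        self.attached_to = attached_to


def try_delete(disk):
    """Refuse to delete a disk that is still attached, as seen in the
    VolumeFailedDelete events above; deletion only proceeds once the
    detach has completed."""
    if disk.attached_to is not None:
        raise RuntimeError(
            "disk already attached to node(%s), could not be deleted"
            % disk.attached_to
        )
    return "deleted"


disk = DiskState(attached_to="2369k8s001")
try:
    try_delete(disk)          # fails while attached; real controller retries
except RuntimeError:
    pass
disk.attached_to = None       # detach completed
print(try_delete(disk))       # deleted
```

In the real controller the failed delete is requeued with exponential backoff rather than handled inline, but the invariant is the same: detach must finish before delete can succeed.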
I1102 06:43:53.959279       1 event.go:291] "Event occurred" object="azuredisk-552/pvc-vjrq4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
E1102 06:43:54.574438       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3033/default: secrets "default-token-v7fvc" is forbidden: unable to create new content in namespace azuredisk-3033 because it is being terminated
I1102 06:43:56.388040       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7 StorageAccountType:Standard_LRS Size:10
I1102 06:43:56.427662       1 pv_controller.go:1676] volume "pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" provisioned for claim "azuredisk-552/pvc-vjrq4"
I1102 06:43:56.427977       1 event.go:291] "Event occurred" object="azuredisk-552/pvc-vjrq4" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ProvisioningSucceeded" message="Successfully provisioned volume pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7 using kubernetes.io/azure-disk"
I1102 06:43:56.430962       1 pv_controller.go:879] volume "pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" entered phase "Bound"
I1102 06:43:56.431000       1 pv_controller.go:982] volume "pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" bound to claim "azuredisk-552/pvc-vjrq4"
I1102 06:43:56.474257       1 pv_controller.go:823] claim "azuredisk-552/pvc-vjrq4" entered phase "Bound"
E1102 06:43:56.839249       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9336/default: secrets "default-token-85qjn" is forbidden: unable to create new content in namespace azuredisk-9336 because it is being terminated
I1102 06:43:57.153391       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7") from node "2369k8s001" 
I1102 06:43:57.204841       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" to node "2369k8s001".
I1102 06:43:57.242107       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7" lun 0 to node "2369k8s001".
I1102 06:43:57.242177       1 azure_controller_standard.go:93] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s001) - attach disk(kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-f40852b8-f726-47fe-aad8-6b1ba9b1d6b7) with DiskEncryptionSetID()
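(Aside on the attach sequence above: "GetDiskLun returned: cannot find Lun" means the disk is not yet attached to the VM, so the controller selects a free LUN — 0 here — and issues the attach. A minimal sketch of free-LUN selection, using a hypothetical helper rather than the real azure_controller_common.go code:)

```python
def first_free_lun(used_luns, max_data_disks=64):
    """Return the lowest LUN not currently occupied by an attached data disk.

    Azure VM sizes cap the number of data disks; 64 is just an
    illustrative upper bound here, not a universal limit.
    """
    used = set(used_luns)
    for lun in range(max_data_disks):
        if lun not in used:
            return lun
    raise RuntimeError("no free LUN: all data disk slots are in use")
```

(With no disks attached, as in the log, this yields LUN 0.)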
I1102 06:43:57.470034       1 namespace_controller.go:185] Namespace has been deleted azuredisk-953
I1102 06:43:59.684825       1 namespace_controller.go:185] Namespace has been deleted azuredisk-3033
... skipping 44 lines ...
I1102 06:47:11.485710       1 garbagecollector.go:580] "Deleting object" object="azuredisk-1351/azuredisk-volume-tester-fdvzd-0" objectUID=51cbb67e-0bff-43bd-913e-511e75336a3b kind="Pod" propagationPolicy=Background
I1102 06:47:11.494243       1 garbagecollector.go:580] "Deleting object" object="azuredisk-1351/azuredisk-volume-tester-fdvzd-68c687c98f" objectUID=fd0397f9-5a7c-417d-b56e-f5ac105b9d73 kind="ControllerRevision" propagationPolicy=Background
I1102 06:47:17.046266       1 pvc_protection_controller.go:291] "PVC is unused" PVC="azuredisk-1351/pvc-azuredisk-volume-tester-fdvzd-0"
I1102 06:47:17.061649       1 pv_controller.go:640] volume "pvc-d3f196c7-3e63-4487-8ca5-afc128654930" is released and reclaim policy "Delete" will be executed
I1102 06:47:17.067437       1 pv_controller.go:879] volume "pvc-d3f196c7-3e63-4487-8ca5-afc128654930" entered phase "Released"
I1102 06:47:17.078822       1 pv_controller.go:1340] isVolumeReleased[pvc-d3f196c7-3e63-4487-8ca5-afc128654930]: volume is released
I1102 06:47:17.171138       1 pv_controller.go:879] volume "pvc-d3f196c7-3e63-4487-8ca5-afc128654930" entered phase "Failed"
E1102 06:47:17.171562       1 goroutinemap.go:150] Operation for "delete-pvc-d3f196c7-3e63-4487-8ca5-afc128654930[d068dd21-a01e-46c4-8c87-873d5ad73833]" failed. No retries permitted until 2021-11-02 06:47:17.671538243 +0000 UTC m=+1567.206480226 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted
I1102 06:47:17.173090       1 event.go:291] "Event occurred" object="pvc-d3f196c7-3e63-4487-8ca5-afc128654930" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/virtualMachines/2369k8s000), could not be deleted"
E1102 06:47:17.180498       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1351/default: secrets "default-token-zvr2b" is forbidden: unable to create new content in namespace azuredisk-1351 because it is being terminated
I1102 06:47:18.098882       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-d3f196c7-3e63-4487-8ca5-afc128654930" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930") on node "2369k8s000" 
I1102 06:47:18.121618       1 operation_generator.go:1577] Verified volume is safe to detach for volume "pvc-d3f196c7-3e63-4487-8ca5-afc128654930" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930") on node "2369k8s000" 
I1102 06:47:18.135289       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930 from node "2369k8s000"
I1102 06:47:18.190437       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930"
I1102 06:47:18.190491       1 azure_controller_standard.go:166] azureDisk - update(kubetest-mxppcdwt): vm(2369k8s000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-mxppcdwt/providers/Microsoft.Compute/disks/kubetest-mxppcdwt-dynamic-pvc-d3f196c7-3e63-4487-8ca5-afc128654930)
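(Aside on the sequence above: the PV deletion fails because the disk is still attached to node 2369k8s000, the goroutinemap schedules a retry after a 500ms backoff, and the attach/detach controller detaches the disk so a later retry can succeed. That control loop can be sketched roughly as follows — all names are hypothetical, this is not the actual kube-controller-manager code:)

```python
import time


class DiskStillAttachedError(Exception):
    """Raised when the cloud provider refuses to delete an attached disk."""


def delete_volume_with_backoff(delete_fn, detach_fn,
                               max_attempts=5, initial_backoff=0.5):
    """Try to delete a disk; if it is still attached, detach it and retry
    after an exponentially growing delay (0.5s, 1s, 2s, ...), mirroring
    the "No retries permitted until ..." windows in the log."""
    backoff = initial_backoff
    for _ in range(max_attempts):
        try:
            delete_fn()
            return True
        except DiskStillAttachedError:
            detach_fn()          # analogous to the detach seen above
            time.sleep(backoff)  # wait out the backoff window
            backoff *= 2
    return False
```

(In the real controllers, deletion and detach run in separate reconcile loops rather than one function; this collapses them purely to show the retry shape.)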
E1102 06:47:19.202875       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5285/default: secrets "default-token-c8872" is forbidden: unable to create new content in namespace azuredisk-5285 because it is being terminated
E1102 06:47:23.617306       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3981/default: secrets "default-token-g2s9k" is forbidden: unable to create new content in namespace azuredisk-3981 because it is being terminated
I1102 06:47:24.344808       1 namespace_controller.go:185] Namespace has been deleted azuredisk-5285
E1102 06:47:24.727540       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-vxlnh" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
2021/11/02 06:47:26 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 9 of 53 Specs in 1282.135 seconds
SUCCESS! -- 9 Passed | 0 Failed | 0 Pending | 44 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 115 lines ...
     

   Flags

     -d

	   Detailed option, when used with list, dumps the json of the object

Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 46 lines ...
     

   Flags

     -d

	   Detailed option, when used with list, dumps the json of the object

Write-Progress : Win32 internal error "Access is denied" 0x5 occurred while reading the console output buffer. Contact Microsoft Customer Support Services.
At C:\windows\system32\WindowsPowerShell\v1.0\Modules\Microsoft.PowerShell.Archive\Microsoft.PowerShell.Archive.psm1:1132 char:9
+         Write-Progress -Activity $cmdletName -Status $status -Percent ...
+         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
... skipping 8 lines ...
Warning: Permanently added '2369k8s001' (ECDSA) to the list of known hosts.
Collecting logs for vm 2369k8s002
Testing connection to host 2369k8s002.
Warning: Permanently added 'kubetest-mxppcdwt.northeurope.cloudapp.azure.com,20.82.228.47' (ECDSA) to the list of known hosts.

Authorized uses only. All activity may be monitored and reported.
channel 0: open failed: connect failed: Temporary failure in name resolution
stdio forwarding failed
ssh_exchange_identification: Connection closed by remote host
2021/11/02 06:49:28 process.go:155: Step 'bash -c /root/tmp730255417/win-ci-logs-collector.sh kubetest-mxppcdwt.northeurope.cloudapp.azure.com /root/tmp730255417 /etc/ssh-key-secret/ssh-private' finished in 1m29.341611662s
2021/11/02 06:49:28 aksengine.go:1141: Deleting resource group: kubetest-mxppcdwt.