Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-07 20:14
Elapsed: 52m54s
Revision:
uploader: crier

No Test Failures!


12 Passed Tests

47 Skipped Tests

Error lines from build-log.txt

... skipping 633 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 189 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:33:04.648: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g6kmw" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  7 20:33:04.701: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 53.153046ms
Sep  7 20:33:06.755: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107155833s
Sep  7 20:33:08.810: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162078882s
Sep  7 20:33:10.864: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.215946268s
Sep  7 20:33:12.918: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.270324698s
Sep  7 20:33:14.973: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 10.325050459s
Sep  7 20:33:17.027: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 12.378881046s
Sep  7 20:33:19.082: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 14.433707586s
Sep  7 20:33:21.136: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 16.487864911s
Sep  7 20:33:23.194: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 18.545935068s
Sep  7 20:33:25.251: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.603002505s
Sep  7 20:33:27.308: INFO: Pod "azuredisk-volume-tester-g6kmw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.660214837s
STEP: Saw pod success
Sep  7 20:33:27.308: INFO: Pod "azuredisk-volume-tester-g6kmw" satisfied condition "Succeeded or Failed"
Sep  7 20:33:27.308: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-g6kmw"
Sep  7 20:33:27.375: INFO: Pod azuredisk-volume-tester-g6kmw has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-g6kmw in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:33:27.553: INFO: deleting PVC "azuredisk-8081"/"pvc-8z4xd"
Sep  7 20:33:27.553: INFO: Deleting PersistentVolumeClaim "pvc-8z4xd"
STEP: waiting for claim's PV "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" to be deleted
Sep  7 20:33:27.609: INFO: Waiting up to 10m0s for PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 to get deleted
Sep  7 20:33:27.673: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Released (63.492137ms)
Sep  7 20:33:32.727: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (5.117903542s)
Sep  7 20:33:37.783: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (10.173474945s)
Sep  7 20:33:42.839: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (15.229712162s)
Sep  7 20:33:47.896: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (20.286691047s)
Sep  7 20:33:52.951: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (25.341537331s)
Sep  7 20:33:58.008: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (30.398531606s)
Sep  7 20:34:03.062: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (35.453139258s)
Sep  7 20:34:08.119: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (40.509556038s)
Sep  7 20:34:13.175: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 found and phase=Failed (45.565609643s)
Sep  7 20:34:18.230: INFO: PersistentVolume pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 was removed
Sep  7 20:34:18.230: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep  7 20:34:18.284: INFO: Claim "azuredisk-8081" in namespace "pvc-8z4xd" doesn't exist in the system
Sep  7 20:34:18.284: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-4rmkx
Sep  7 20:34:18.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
... skipping 80 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  7 20:34:37.550: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-fdfqb"
Sep  7 20:34:37.616: INFO: Error getting logs for pod azuredisk-volume-tester-fdfqb: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-fdfqb)
STEP: Deleting pod azuredisk-volume-tester-fdfqb in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:34:37.776: INFO: deleting PVC "azuredisk-5466"/"pvc-dhlxn"
Sep  7 20:34:37.776: INFO: Deleting PersistentVolumeClaim "pvc-dhlxn"
STEP: waiting for claim's PV "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" to be deleted
... skipping 18 lines ...
Sep  7 20:36:03.840: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m26.008573565s)
Sep  7 20:36:08.894: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m31.062962423s)
Sep  7 20:36:13.958: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m36.127194647s)
Sep  7 20:36:19.015: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m41.183546469s)
Sep  7 20:36:24.069: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m46.237941939s)
Sep  7 20:36:29.126: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Bound (1m51.295482422s)
Sep  7 20:36:34.181: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (1m56.350410174s)
Sep  7 20:36:39.236: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (2m1.405036972s)
Sep  7 20:36:44.293: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (2m6.462463036s)
Sep  7 20:36:49.349: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (2m11.517781746s)
Sep  7 20:36:54.408: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (2m16.576983263s)
Sep  7 20:36:59.462: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a found and phase=Failed (2m21.631218418s)
Sep  7 20:37:04.518: INFO: PersistentVolume pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a was removed
Sep  7 20:37:04.518: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5466 to be removed
Sep  7 20:37:04.571: INFO: Claim "azuredisk-5466" in namespace "pvc-dhlxn" doesn't exist in the system
Sep  7 20:37:04.572: INFO: deleting StorageClass azuredisk-5466-kubernetes.io-azure-disk-dynamic-sc-9p5xr
Sep  7 20:37:04.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5466" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:37:05.675: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-9nf2s" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  7 20:37:05.729: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 53.650356ms
Sep  7 20:37:07.783: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107997421s
Sep  7 20:37:09.838: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16278696s
Sep  7 20:37:11.893: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 6.217609185s
Sep  7 20:37:13.948: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 8.272441949s
Sep  7 20:37:16.003: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.327318628s
Sep  7 20:37:18.057: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 12.381968556s
Sep  7 20:37:20.112: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 14.436996107s
Sep  7 20:37:22.169: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 16.494009201s
Sep  7 20:37:24.227: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Pending", Reason="", readiness=false. Elapsed: 18.551115614s
Sep  7 20:37:26.285: INFO: Pod "azuredisk-volume-tester-9nf2s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.609217832s
STEP: Saw pod success
Sep  7 20:37:26.285: INFO: Pod "azuredisk-volume-tester-9nf2s" satisfied condition "Succeeded or Failed"
Sep  7 20:37:26.285: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-9nf2s"
Sep  7 20:37:26.348: INFO: Pod azuredisk-volume-tester-9nf2s has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-9nf2s in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:37:26.518: INFO: deleting PVC "azuredisk-2790"/"pvc-dz5cc"
Sep  7 20:37:26.518: INFO: Deleting PersistentVolumeClaim "pvc-dz5cc"
STEP: waiting for claim's PV "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" to be deleted
Sep  7 20:37:26.575: INFO: Waiting up to 10m0s for PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 to get deleted
Sep  7 20:37:26.628: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (52.788827ms)
Sep  7 20:37:31.685: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (5.11064284s)
Sep  7 20:37:36.743: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (10.168384035s)
Sep  7 20:37:41.799: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (15.223679373s)
Sep  7 20:37:46.856: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (20.280889599s)
Sep  7 20:37:51.910: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (25.334999958s)
Sep  7 20:37:56.968: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 found and phase=Failed (30.392948127s)
Sep  7 20:38:02.025: INFO: PersistentVolume pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7 was removed
Sep  7 20:38:02.025: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2790 to be removed
Sep  7 20:38:02.079: INFO: Claim "azuredisk-2790" in namespace "pvc-dz5cc" doesn't exist in the system
Sep  7 20:38:02.079: INFO: deleting StorageClass azuredisk-2790-kubernetes.io-azure-disk-dynamic-sc-8bws7
Sep  7 20:38:02.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-2790" for this suite.
... skipping 22 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  7 20:38:03.223: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-lffxs" in namespace "azuredisk-5356" to be "Error status code"
Sep  7 20:38:03.277: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 53.96819ms
Sep  7 20:38:05.331: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107855255s
Sep  7 20:38:07.384: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161391503s
Sep  7 20:38:09.442: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 6.218707231s
Sep  7 20:38:11.497: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 8.273999301s
Sep  7 20:38:13.552: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 10.32900415s
Sep  7 20:38:15.608: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 12.384510127s
Sep  7 20:38:17.663: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 14.4402059s
Sep  7 20:38:19.723: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 16.499785119s
Sep  7 20:38:21.780: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Pending", Reason="", readiness=false. Elapsed: 18.557022054s
Sep  7 20:38:23.839: INFO: Pod "azuredisk-volume-tester-lffxs": Phase="Failed", Reason="", readiness=false. Elapsed: 20.616056922s
STEP: Saw pod failure
Sep  7 20:38:23.839: INFO: Pod "azuredisk-volume-tester-lffxs" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  7 20:38:23.895: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-lffxs"
Sep  7 20:38:23.953: INFO: Pod azuredisk-volume-tester-lffxs has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-lffxs in namespace azuredisk-5356
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:38:24.132: INFO: deleting PVC "azuredisk-5356"/"pvc-zs5g6"
Sep  7 20:38:24.132: INFO: Deleting PersistentVolumeClaim "pvc-zs5g6"
STEP: waiting for claim's PV "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" to be deleted
Sep  7 20:38:24.187: INFO: Waiting up to 10m0s for PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 to get deleted
Sep  7 20:38:24.241: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (53.456174ms)
Sep  7 20:38:29.295: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (5.107522585s)
Sep  7 20:38:34.350: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (10.162972473s)
Sep  7 20:38:39.408: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (15.221109094s)
Sep  7 20:38:44.463: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (20.275965072s)
Sep  7 20:38:49.517: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (25.330253486s)
Sep  7 20:38:54.575: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (30.388225964s)
Sep  7 20:38:59.632: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 found and phase=Failed (35.44455206s)
Sep  7 20:39:04.687: INFO: PersistentVolume pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 was removed
Sep  7 20:39:04.687: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep  7 20:39:04.740: INFO: Claim "azuredisk-5356" in namespace "pvc-zs5g6" doesn't exist in the system
Sep  7 20:39:04.740: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-c7wcm
Sep  7 20:39:04.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 53 lines ...
Sep  7 20:40:07.962: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (5.111819448s)
Sep  7 20:40:13.019: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (10.169027324s)
Sep  7 20:40:18.076: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (15.226087717s)
Sep  7 20:40:23.134: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (20.283565215s)
Sep  7 20:40:28.189: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (25.339043376s)
Sep  7 20:40:33.244: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Bound (30.393637093s)
Sep  7 20:40:38.302: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Failed (35.452248374s)
Sep  7 20:40:43.360: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Failed (40.509803569s)
Sep  7 20:40:48.414: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Failed (45.564259157s)
Sep  7 20:40:53.470: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Failed (50.619521255s)
Sep  7 20:40:58.525: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b found and phase=Failed (55.674609662s)
Sep  7 20:41:03.584: INFO: PersistentVolume pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b was removed
Sep  7 20:41:03.584: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:41:03.637: INFO: Claim "azuredisk-5194" in namespace "pvc-642qf" doesn't exist in the system
Sep  7 20:41:03.637: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-6sdvg
Sep  7 20:41:03.695: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-2nqr6"
Sep  7 20:41:03.753: INFO: Pod azuredisk-volume-tester-2nqr6 has the following logs: 
... skipping 8 lines ...
Sep  7 20:41:09.087: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (5.11029618s)
Sep  7 20:41:14.143: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (10.166173843s)
Sep  7 20:41:19.201: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (15.224691283s)
Sep  7 20:41:24.256: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (20.27973337s)
Sep  7 20:41:29.314: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (25.337593166s)
Sep  7 20:41:34.374: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Bound (30.397402351s)
Sep  7 20:41:39.432: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (35.455884828s)
Sep  7 20:41:44.487: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (40.510614818s)
Sep  7 20:41:49.544: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (45.567100469s)
Sep  7 20:41:54.599: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (50.621941531s)
Sep  7 20:41:59.656: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (55.679764516s)
Sep  7 20:42:04.711: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (1m0.734522054s)
Sep  7 20:42:09.767: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (1m5.790579468s)
Sep  7 20:42:14.824: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 found and phase=Failed (1m10.847077287s)
Sep  7 20:42:19.878: INFO: PersistentVolume pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 was removed
Sep  7 20:42:19.878: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:42:19.931: INFO: Claim "azuredisk-5194" in namespace "pvc-wtxnj" doesn't exist in the system
Sep  7 20:42:19.931: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-blhf4
Sep  7 20:42:19.988: INFO: deleting Pod "azuredisk-5194"/"azuredisk-volume-tester-h5tqp"
Sep  7 20:42:20.065: INFO: Pod azuredisk-volume-tester-h5tqp has the following logs: 
... skipping 8 lines ...
Sep  7 20:42:25.397: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (5.107404185s)
Sep  7 20:42:30.453: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (10.162876471s)
Sep  7 20:42:35.510: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (15.220311954s)
Sep  7 20:42:40.566: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (20.276033942s)
Sep  7 20:42:45.620: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (25.330284335s)
Sep  7 20:42:50.674: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Bound (30.38401533s)
Sep  7 20:42:55.729: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (35.43854648s)
Sep  7 20:43:00.786: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (40.496495266s)
Sep  7 20:43:05.843: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (45.553109938s)
Sep  7 20:43:10.901: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (50.610563664s)
Sep  7 20:43:15.954: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (55.664354171s)
Sep  7 20:43:21.008: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (1m0.718061159s)
Sep  7 20:43:26.063: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa found and phase=Failed (1m5.772886993s)
Sep  7 20:43:31.121: INFO: PersistentVolume pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa was removed
Sep  7 20:43:31.121: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5194 to be removed
Sep  7 20:43:31.175: INFO: Claim "azuredisk-5194" in namespace "pvc-s8k78" doesn't exist in the system
Sep  7 20:43:31.175: INFO: deleting StorageClass azuredisk-5194-kubernetes.io-azure-disk-dynamic-sc-9g9wm
Sep  7 20:43:31.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5194" for this suite.
... skipping 59 lines ...
Sep  7 20:45:10.195: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (5.109217472s)
Sep  7 20:45:15.251: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (10.165551229s)
Sep  7 20:45:20.309: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (15.223488325s)
Sep  7 20:45:25.364: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (20.278451857s)
Sep  7 20:45:30.418: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (25.332657264s)
Sep  7 20:45:35.480: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Bound (30.394091033s)
Sep  7 20:45:40.538: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (35.45207148s)
Sep  7 20:45:45.592: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (40.506784043s)
Sep  7 20:45:50.652: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (45.566145241s)
Sep  7 20:45:55.706: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (50.620483567s)
Sep  7 20:46:00.763: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (55.677532349s)
Sep  7 20:46:05.822: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (1m0.736439485s)
Sep  7 20:46:10.880: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 found and phase=Failed (1m5.79441026s)
Sep  7 20:46:15.937: INFO: PersistentVolume pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 was removed
Sep  7 20:46:15.937: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-1353 to be removed
Sep  7 20:46:15.991: INFO: Claim "azuredisk-1353" in namespace "pvc-cxt2s" doesn't exist in the system
Sep  7 20:46:15.991: INFO: deleting StorageClass azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-x4lpq
Sep  7 20:46:16.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-1353" for this suite.
... skipping 161 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:46:35.314: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-nvdvq" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  7 20:46:35.368: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 54.276427ms
Sep  7 20:46:37.423: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109690882s
Sep  7 20:46:39.482: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168718249s
Sep  7 20:46:41.540: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.226214611s
Sep  7 20:46:43.598: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 8.28399478s
Sep  7 20:46:45.655: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 10.34176346s
... skipping 15 lines ...
Sep  7 20:47:18.581: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 43.267522024s
Sep  7 20:47:20.640: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 45.325968386s
Sep  7 20:47:22.698: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 47.384574258s
Sep  7 20:47:24.756: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Pending", Reason="", readiness=false. Elapsed: 49.441800691s
Sep  7 20:47:26.814: INFO: Pod "azuredisk-volume-tester-nvdvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 51.500362506s
STEP: Saw pod success
Sep  7 20:47:26.814: INFO: Pod "azuredisk-volume-tester-nvdvq" satisfied condition "Succeeded or Failed"
Sep  7 20:47:26.814: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-nvdvq"
Sep  7 20:47:26.882: INFO: Pod azuredisk-volume-tester-nvdvq has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-nvdvq in namespace azuredisk-59
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:47:27.053: INFO: deleting PVC "azuredisk-59"/"pvc-n764p"
Sep  7 20:47:27.053: INFO: Deleting PersistentVolumeClaim "pvc-n764p"
STEP: waiting for claim's PV "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" to be deleted
Sep  7 20:47:27.108: INFO: Waiting up to 10m0s for PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c to get deleted
Sep  7 20:47:27.162: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Released (53.86546ms)
Sep  7 20:47:32.216: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (5.108653316s)
Sep  7 20:47:37.271: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (10.16351886s)
Sep  7 20:47:42.333: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (15.224720932s)
Sep  7 20:47:47.391: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (20.283072685s)
Sep  7 20:47:52.449: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (25.340966505s)
Sep  7 20:47:57.506: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (30.398111357s)
Sep  7 20:48:02.564: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (35.456434879s)
Sep  7 20:48:07.619: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (40.511613424s)
Sep  7 20:48:12.678: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (45.570479747s)
Sep  7 20:48:17.737: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (50.629689925s)
Sep  7 20:48:22.793: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (55.685448761s)
Sep  7 20:48:27.851: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m0.743390433s)
Sep  7 20:48:32.906: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m5.798469249s)
Sep  7 20:48:37.964: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m10.856325132s)
Sep  7 20:48:43.020: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m15.912423405s)
Sep  7 20:48:48.078: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m20.96991891s)
Sep  7 20:48:53.132: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m26.024110626s)
Sep  7 20:48:58.189: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m31.081673215s)
Sep  7 20:49:03.247: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m36.138751975s)
Sep  7 20:49:08.303: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m41.194837402s)
Sep  7 20:49:13.358: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c found and phase=Failed (1m46.250411255s)
Sep  7 20:49:18.416: INFO: PersistentVolume pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c was removed
Sep  7 20:49:18.416: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-59 to be removed
Sep  7 20:49:18.469: INFO: Claim "azuredisk-59" in namespace "pvc-n764p" doesn't exist in the system
Sep  7 20:49:18.469: INFO: deleting StorageClass azuredisk-59-kubernetes.io-azure-disk-dynamic-sc-stqsf
STEP: validating provisioned PV
STEP: checking the PV
... skipping 51 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:49:40.509: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-2ldx4" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  7 20:49:40.562: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.766133ms
Sep  7 20:49:42.616: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107326413s
Sep  7 20:49:44.689: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180403241s
Sep  7 20:49:46.744: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.234767634s
Sep  7 20:49:48.797: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.288455638s
Sep  7 20:49:50.851: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.342364199s
... skipping 4 lines ...
Sep  7 20:50:01.126: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.617442302s
Sep  7 20:50:03.182: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.672579136s
Sep  7 20:50:05.236: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.727126891s
Sep  7 20:50:07.293: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.784119104s
Sep  7 20:50:09.350: INFO: Pod "azuredisk-volume-tester-2ldx4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.840817315s
STEP: Saw pod success
Sep  7 20:50:09.350: INFO: Pod "azuredisk-volume-tester-2ldx4" satisfied condition "Succeeded or Failed"
Sep  7 20:50:09.350: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-2ldx4"
Sep  7 20:50:09.422: INFO: Pod azuredisk-volume-tester-2ldx4 has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.068989 seconds, 1.4GB/s
hello world

STEP: Deleting pod azuredisk-volume-tester-2ldx4 in namespace azuredisk-2546
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:50:09.635: INFO: deleting PVC "azuredisk-2546"/"pvc-n7dh7"
Sep  7 20:50:09.635: INFO: Deleting PersistentVolumeClaim "pvc-n7dh7"
STEP: waiting for claim's PV "pvc-c18050d4-e577-47c9-955e-4f8784c48835" to be deleted
Sep  7 20:50:09.690: INFO: Waiting up to 10m0s for PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 to get deleted
Sep  7 20:50:09.743: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (52.897169ms)
Sep  7 20:50:14.798: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (5.108112008s)
Sep  7 20:50:19.853: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (10.162488365s)
Sep  7 20:50:24.909: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (15.219233666s)
Sep  7 20:50:29.966: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (20.276112314s)
Sep  7 20:50:35.023: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (25.332544401s)
Sep  7 20:50:40.079: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (30.388322136s)
Sep  7 20:50:45.134: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (35.443922885s)
Sep  7 20:50:50.192: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (40.502173144s)
Sep  7 20:50:55.246: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (45.556167272s)
Sep  7 20:51:00.303: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 found and phase=Failed (50.612707416s)
Sep  7 20:51:05.358: INFO: PersistentVolume pvc-c18050d4-e577-47c9-955e-4f8784c48835 was removed
Sep  7 20:51:05.359: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-2546 to be removed
Sep  7 20:51:05.412: INFO: Claim "azuredisk-2546" in namespace "pvc-n7dh7" doesn't exist in the system
Sep  7 20:51:05.412: INFO: deleting StorageClass azuredisk-2546-kubernetes.io-azure-disk-dynamic-sc-gzmjn
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  7 20:51:18.781: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-g7msh" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  7 20:51:18.835: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 53.781073ms
Sep  7 20:51:20.890: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108087764s
Sep  7 20:51:22.947: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165267485s
Sep  7 20:51:25.003: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.221737742s
Sep  7 20:51:27.061: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.279116732s
Sep  7 20:51:29.118: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 10.3360943s
... skipping 11 lines ...
Sep  7 20:51:53.827: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 35.04589865s
Sep  7 20:51:55.885: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 37.103675926s
Sep  7 20:51:57.942: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 39.160303402s
Sep  7 20:51:59.998: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Pending", Reason="", readiness=false. Elapsed: 41.216912543s
Sep  7 20:52:02.056: INFO: Pod "azuredisk-volume-tester-g7msh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.274603139s
STEP: Saw pod success
Sep  7 20:52:02.056: INFO: Pod "azuredisk-volume-tester-g7msh" satisfied condition "Succeeded or Failed"
Sep  7 20:52:02.056: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-g7msh"
Sep  7 20:52:02.121: INFO: Pod azuredisk-volume-tester-g7msh has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-g7msh in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:52:02.343: INFO: deleting PVC "azuredisk-8582"/"pvc-px5qh"
Sep  7 20:52:02.343: INFO: Deleting PersistentVolumeClaim "pvc-px5qh"
STEP: waiting for claim's PV "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" to be deleted
Sep  7 20:52:02.399: INFO: Waiting up to 10m0s for PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 to get deleted
Sep  7 20:52:02.647: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Bound (247.559057ms)
Sep  7 20:52:07.701: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Failed (5.301414483s)
Sep  7 20:52:12.755: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Failed (10.355538792s)
Sep  7 20:52:17.812: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Failed (15.413217416s)
Sep  7 20:52:22.870: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Failed (20.471083964s)
Sep  7 20:52:27.924: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 found and phase=Failed (25.524766941s)
Sep  7 20:52:32.978: INFO: PersistentVolume pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 was removed
Sep  7 20:52:32.978: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 20:52:33.031: INFO: Claim "azuredisk-8582" in namespace "pvc-px5qh" doesn't exist in the system
Sep  7 20:52:33.031: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-gr9gx
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:52:33.193: INFO: deleting PVC "azuredisk-8582"/"pvc-ndw5c"
Sep  7 20:52:33.193: INFO: Deleting PersistentVolumeClaim "pvc-ndw5c"
STEP: waiting for claim's PV "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" to be deleted
Sep  7 20:52:33.247: INFO: Waiting up to 10m0s for PersistentVolume pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 to get deleted
Sep  7 20:52:33.300: INFO: PersistentVolume pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 found and phase=Failed (52.979808ms)
Sep  7 20:52:38.355: INFO: PersistentVolume pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 found and phase=Failed (5.107837126s)
Sep  7 20:52:43.411: INFO: PersistentVolume pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 found and phase=Failed (10.163655974s)
Sep  7 20:52:48.549: INFO: PersistentVolume pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 was removed
Sep  7 20:52:48.549: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 20:52:48.603: INFO: Claim "azuredisk-8582" in namespace "pvc-ndw5c" doesn't exist in the system
Sep  7 20:52:48.603: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-5qnpd
STEP: validating provisioned PV
STEP: checking the PV
Sep  7 20:52:48.775: INFO: deleting PVC "azuredisk-8582"/"pvc-f62bz"
Sep  7 20:52:48.775: INFO: Deleting PersistentVolumeClaim "pvc-f62bz"
STEP: waiting for claim's PV "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" to be deleted
Sep  7 20:52:48.832: INFO: Waiting up to 10m0s for PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b to get deleted
Sep  7 20:52:48.886: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (53.487631ms)
Sep  7 20:52:53.943: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (5.110879391s)
Sep  7 20:52:58.996: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (10.16434407s)
Sep  7 20:53:04.054: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (15.221904778s)
Sep  7 20:53:09.112: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (20.279488357s)
Sep  7 20:53:14.167: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b found and phase=Failed (25.334789718s)
Sep  7 20:53:19.224: INFO: PersistentVolume pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b was removed
Sep  7 20:53:19.224: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8582 to be removed
Sep  7 20:53:19.277: INFO: Claim "azuredisk-8582" in namespace "pvc-f62bz" doesn't exist in the system
Sep  7 20:53:19.278: INFO: deleting StorageClass azuredisk-8582-kubernetes.io-azure-disk-dynamic-sc-z9qnh
Sep  7 20:53:19.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8582" for this suite.
... skipping 379 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164
STEP: Creating a kubernetes client
Sep  7 20:56:28.158: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.506 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:164

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 247 lines ...
I0907 20:27:44.365358       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662582464\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662582463\" (2022-09-07 19:27:43 +0000 UTC to 2023-09-07 19:27:43 +0000 UTC (now=2022-09-07 20:27:44.365326333 +0000 UTC))"
I0907 20:27:44.365532       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0907 20:27:44.365529       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0907 20:27:44.365491       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0907 20:27:44.365560       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0907 20:27:44.366418       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0907 20:27:46.772364       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0907 20:27:46.772391       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0907 20:27:50.463697       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0907 20:27:50.464321       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-l75yso-control-plane-9pv9f_77dc0e9e-2d46-4c47-8787-15d51bcb5680 became leader"
W0907 20:27:50.621438       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0907 20:27:50.622137       1 azure_auth.go:232] Using AzurePublicCloud environment
I0907 20:27:50.622182       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0907 20:27:50.622300       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 31 lines ...
I0907 20:27:50.623885       1 reflector.go:219] Starting reflector *v1.ServiceAccount (14h27m33.449301068s) from k8s.io/client-go/informers/factory.go:134
I0907 20:27:50.624944       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0907 20:27:50.628319       1 reflector.go:219] Starting reflector *v1.Node (14h27m33.449301068s) from k8s.io/client-go/informers/factory.go:134
I0907 20:27:50.628586       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
I0907 20:27:50.723695       1 shared_informer.go:270] caches populated
I0907 20:27:50.723724       1 shared_informer.go:247] Caches are synced for tokens 
W0907 20:27:50.918653       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0907 20:27:50.918677       1 controllermanager.go:562] Starting "pvc-protection"
I0907 20:27:50.924213       1 controllermanager.go:577] Started "pvc-protection"
I0907 20:27:50.924492       1 controllermanager.go:562] Starting "resourcequota"
I0907 20:27:50.924436       1 pvc_protection_controller.go:110] "Starting PVC protection controller"
I0907 20:27:50.924758       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
I0907 20:27:50.963032       1 resource_quota_monitor.go:177] QuotaMonitor using a shared informer for resource "/v1, Resource=serviceaccounts"
... skipping 75 lines ...
I0907 20:27:51.049428       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0907 20:27:51.049519       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 20:27:51.049635       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 20:27:51.049850       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0907 20:27:51.050047       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0907 20:27:51.050168       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0907 20:27:51.050337       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:27:51.050437       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 20:27:51.050701       1 controllermanager.go:577] Started "attachdetach"
I0907 20:27:51.050881       1 controllermanager.go:562] Starting "endpoint"
I0907 20:27:51.050834       1 attach_detach_controller.go:328] Starting attach detach controller
I0907 20:27:51.051234       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0907 20:27:51.061843       1 controllermanager.go:577] Started "endpoint"
... skipping 23 lines ...
I0907 20:27:51.651340       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0907 20:27:51.651377       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0907 20:27:51.651468       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0907 20:27:51.651558       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0907 20:27:51.651644       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0907 20:27:51.651715       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0907 20:27:51.651822       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0907 20:27:51.651918       1 plugins.go:643] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0907 20:27:51.652081       1 controllermanager.go:577] Started "persistentvolume-binder"
I0907 20:27:51.652219       1 controllermanager.go:562] Starting "replicationcontroller"
I0907 20:27:51.652161       1 pv_controller_base.go:308] Starting persistent volume controller
I0907 20:27:51.652396       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0907 20:27:51.801182       1 controllermanager.go:577] Started "replicationcontroller"
... skipping 21 lines ...
I0907 20:27:52.151112       1 shared_informer.go:247] Caches are synced for TTL 
I0907 20:27:52.300884       1 controllermanager.go:577] Started "root-ca-cert-publisher"
I0907 20:27:52.300922       1 controllermanager.go:562] Starting "garbagecollector"
I0907 20:27:52.300956       1 publisher.go:107] Starting root CA certificate configmap publisher
I0907 20:27:52.300980       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0907 20:27:52.463117       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-control-plane-9pv9f"
W0907 20:27:52.463721       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l75yso-control-plane-9pv9f" does not exist
I0907 20:27:52.478270       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-control-plane-9pv9f"
I0907 20:27:52.547889       1 request.go:597] Waited for 81.504586ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller
I0907 20:27:52.597954       1 request.go:597] Waited for 85.087573ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/generic-garbage-collector
I0907 20:27:52.600484       1 garbagecollector.go:142] Starting garbage collector controller
I0907 20:27:52.600647       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0907 20:27:52.600783       1 graph_builder.go:273] garbage controller monitor not synced: no monitors
... skipping 448 lines ...
I0907 20:27:57.828167       1 replica_set.go:563] "Too few replicas" replicaSet="kube-system/coredns-78fcd69978" need=2 creating=2
I0907 20:27:57.828832       1 deployment_controller.go:215] "ReplicaSet added" replicaSet="kube-system/coredns-78fcd69978"
I0907 20:27:57.832824       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I0907 20:27:57.859616       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0907 20:27:57.859943       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:27:57.832588041 +0000 UTC m=+14.833966688 - now: 2022-09-07 20:27:57.859934949 +0000 UTC m=+14.861313596]
I0907 20:27:57.875505       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="69.871266ms"
I0907 20:27:57.875640       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:27:57.875683       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:27:57.875662297 +0000 UTC m=+14.877040944"
I0907 20:27:57.878197       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2022-09-07 20:27:57 +0000 UTC - now: 2022-09-07 20:27:57.878171788 +0000 UTC m=+14.879550335]
I0907 20:27:57.883519       1 endpointslicemirroring_controller.go:274] syncEndpoints("kube-system/kube-dns")
I0907 20:27:57.883539       1 endpointslicemirroring_controller.go:309] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0907 20:27:57.883556       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (53.499µs)
I0907 20:27:57.893270       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (65.76238ms)
... skipping 123 lines ...
I0907 20:28:02.514859       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/calico-kube-controllers-969cf87c4-z5zrg"
I0907 20:28:02.514874       1 disruption.go:415] addPod called on pod "calico-kube-controllers-969cf87c4-z5zrg"
I0907 20:28:02.514207       1 event.go:291] "Event occurred" object="kube-system/calico-kube-controllers-969cf87c4" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-969cf87c4-z5zrg"
I0907 20:28:02.520163       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-z5zrg, PodDisruptionBudget controller will avoid syncing.
I0907 20:28:02.520172       1 disruption.go:418] No matching pdb for pod "calico-kube-controllers-969cf87c4-z5zrg"
I0907 20:28:02.523008       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="20.527118ms"
I0907 20:28:02.523175       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:28:02.523332       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-07 20:28:02.523308866 +0000 UTC m=+19.524687513"
I0907 20:28:02.523498       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-969cf87c4-z5zrg" podUID=429be184-7b06-4125-9cea-f4e6bd3c834a
I0907 20:28:02.523530       1 replica_set.go:443] Pod calico-kube-controllers-969cf87c4-z5zrg updated, objectMeta {Name:calico-kube-controllers-969cf87c4-z5zrg GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:429be184-7b06-4125-9cea-f4e6bd3c834a ResourceVersion:519 Generation:0 CreationTimestamp:2022-09-07 20:28:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:73c80745-50e6-48d8-a730-15b4da2c0669 Controller:0xc000c95147 BlockOwnerDeletion:0xc000c95148}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:28:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"73c80745-50e6-48d8-a730-15b4da2c0669\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:calico-kube-controllers-969cf87c4-z5zrg GenerateName:calico-kube-controllers-969cf87c4- Namespace:kube-system SelfLink: UID:429be184-7b06-4125-9cea-f4e6bd3c834a ResourceVersion:522 Generation:0 CreationTimestamp:2022-09-07 20:28:02 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:969cf87c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-969cf87c4 UID:73c80745-50e6-48d8-a730-15b4da2c0669 Controller:0xc0004fe4b0 BlockOwnerDeletion:0xc0004fe4b1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:28:02 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"73c80745-50e6-48d8-a730-15b4da2c0669\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:28:02 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0907 20:28:02.523759       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-969cf87c4-z5zrg"
I0907 20:28:02.523776       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-969cf87c4-z5zrg, PodDisruptionBudget controller will avoid syncing.
I0907 20:28:02.523783       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-969cf87c4-z5zrg"
... skipping 423 lines ...
I0907 20:28:34.114922       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (95.595µs)
I0907 20:28:34.115119       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-p9jqp"
I0907 20:28:34.115146       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-p9jqp, PodDisruptionBudget controller will avoid syncing.
I0907 20:28:34.115154       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-p9jqp"
I0907 20:28:35.410767       1 gc_controller.go:161] GC'ing orphaned
I0907 20:28:35.410796       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:28:35.417951       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l75yso-control-plane-9pv9f transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:28:13 +0000 UTC,LastTransitionTime:2022-09-07 20:27:30 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:28:33 +0000 UTC,LastTransitionTime:2022-09-07 20:28:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:28:35.418019       1 node_lifecycle_controller.go:1047] Node capz-l75yso-control-plane-9pv9f ReadyCondition updated. Updating timestamp.
I0907 20:28:35.418106       1 node_lifecycle_controller.go:893] Node capz-l75yso-control-plane-9pv9f is healthy again, removing all taints
I0907 20:28:35.418139       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0907 20:28:35.563887       1 replica_set.go:443] Pod coredns-78fcd69978-qnn9p updated, objectMeta {Name:coredns-78fcd69978-qnn9p GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:2ff27a57-11ea-479c-95f1-e3c144d07606 ResourceVersion:658 Generation:0 CreationTimestamp:2022-09-07 20:27:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[cni.projectcalico.org/containerID:ccb8b6e993f5c45dde54477ddd3bf4dada679b5cb399150cd260e57d24fc620b cni.projectcalico.org/podIP:192.168.13.1/32 cni.projectcalico.org/podIPs:192.168.13.1/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:41fa7f63-f2eb-45b5-8ec9-0af42242df69 Controller:0xc0027c0b67 BlockOwnerDeletion:0xc0027c0b68}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:27:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41fa7f63-f2eb-45b5-8ec9-0af42242df69\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:27:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 20:28:33 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:28:33 +0000 UTC 
FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]} -> {Name:coredns-78fcd69978-qnn9p GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:2ff27a57-11ea-479c-95f1-e3c144d07606 ResourceVersion:674 Generation:0 CreationTimestamp:2022-09-07 20:27:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[cni.projectcalico.org/containerID:ccb8b6e993f5c45dde54477ddd3bf4dada679b5cb399150cd260e57d24fc620b cni.projectcalico.org/podIP:192.168.13.1/32 cni.projectcalico.org/podIPs:192.168.13.1/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:41fa7f63-f2eb-45b5-8ec9-0af42242df69 Controller:0xc002635347 BlockOwnerDeletion:0xc002635348}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-07 20:27:57 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"41fa7f63-f2eb-45b5-8ec9-0af42242df69\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-07 20:27:57 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-07 20:28:33 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-07 20:28:35 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.13.1\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0907 20:28:35.564072       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc0be5d9371577f1d, ext:14829196404, loc:(*time.Location)(0x751a1a0)}}
I0907 20:28:35.564212       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (145.898µs)
... skipping 205 lines ...
I0907 20:29:58.428960       1 certificate_controller.go:173] Finished syncing certificate request "csr-87ld9" (800ns)
I0907 20:29:58.428970       1 certificate_controller.go:87] Updating certificate request csr-87ld9
I0907 20:29:58.428979       1 certificate_controller.go:173] Finished syncing certificate request "csr-87ld9" (900ns)
I0907 20:29:58.429172       1 certificate_controller.go:87] Updating certificate request csr-87ld9
I0907 20:29:58.429310       1 certificate_controller.go:173] Finished syncing certificate request "csr-87ld9" (1.1µs)
I0907 20:30:00.918421       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
W0907 20:30:00.918454       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l75yso-mp-0000000" does not exist
I0907 20:30:00.918505       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l75yso-mp-0000000}
I0907 20:30:00.918572       1 taint_manager.go:440] "Updating known taints on node" node="capz-l75yso-mp-0000000" taints=[]
I0907 20:30:00.919819       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5d96634e7e57, ext:26593725258, loc:(*time.Location)(0x751a1a0)}}
I0907 20:30:00.921508       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be5db236ed0c25, ext:137922883452, loc:(*time.Location)(0x751a1a0)}}
I0907 20:30:00.921844       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-l75yso-mp-0000000], creating 1
I0907 20:30:00.921426       1 controller.go:693] Ignoring node capz-l75yso-mp-0000000 with Ready condition status False
... skipping 118 lines ...
I0907 20:30:03.536142       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be5db2dff4d79e, ext:140537518225, loc:(*time.Location)(0x751a1a0)}}
I0907 20:30:03.536175       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 20:30:03.536211       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0907 20:30:03.536235       1 daemon_controller.go:1102] Updating daemon set status
I0907 20:30:03.536292       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.011389ms)
I0907 20:30:03.604680       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
W0907 20:30:03.604722       1 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-l75yso-mp-0000001" does not exist
I0907 20:30:03.604798       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l75yso-mp-0000001}
I0907 20:30:03.604817       1 taint_manager.go:440] "Updating known taints on node" node="capz-l75yso-mp-0000001" taints=[]
I0907 20:30:03.606716       1 controller.go:693] Ignoring node capz-l75yso-mp-0000000 with Ready condition status False
I0907 20:30:03.606737       1 controller.go:693] Ignoring node capz-l75yso-mp-0000001 with Ready condition status False
I0907 20:30:03.606746       1 controller.go:272] Triggering nodeSync
I0907 20:30:03.606754       1 controller.go:291] nodeSync has been triggered
... skipping 391 lines ...
I0907 20:30:34.306265       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0907 20:30:34.306373       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0907 20:30:34.306411       1 daemon_controller.go:1102] Updating daemon set status
I0907 20:30:34.306507       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.170688ms)
I0907 20:30:35.413551       1 gc_controller.go:161] GC'ing orphaned
I0907 20:30:35.413577       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:30:35.434931       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l75yso-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:30:10 +0000 UTC,LastTransitionTime:2022-09-07 20:30:00 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:30:30 +0000 UTC,LastTransitionTime:2022-09-07 20:30:30 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:30:35.435006       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:30:35.449638       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
I0907 20:30:35.449921       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l75yso-mp-0000000}
I0907 20:30:35.450291       1 taint_manager.go:440] "Updating known taints on node" node="capz-l75yso-mp-0000000" taints=[]
I0907 20:30:35.450332       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-l75yso-mp-0000000"
I0907 20:30:35.450527       1 node_lifecycle_controller.go:893] Node capz-l75yso-mp-0000000 is healthy again, removing all taints
I0907 20:30:35.450591       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-l75yso-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-07 20:30:13 +0000 UTC,LastTransitionTime:2022-09-07 20:30:03 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-07 20:30:33 +0000 UTC,LastTransitionTime:2022-09-07 20:30:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0907 20:30:35.450650       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:30:35.480998       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
I0907 20:30:35.481385       1 node_lifecycle_controller.go:893] Node capz-l75yso-mp-0000001 is healthy again, removing all taints
I0907 20:30:35.481811       1 node_lifecycle_controller.go:1214] Controller detected that zone westus2::1 is now in state Normal.
I0907 20:30:35.481829       1 node_lifecycle_controller.go:1214] Controller detected that zone westus2::0 is now in state Normal.
I0907 20:30:35.481471       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-l75yso-mp-0000001}
... skipping 414 lines ...
I0907 20:33:27.654143       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:33:27.654274       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:33:27.654429       1 pv_controller.go:1763] operation "delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]" is already running, skipping
I0907 20:33:27.653665       1 pv_protection_controller.go:205] Got event on PV pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3
I0907 20:33:27.668264       1 pv_controller.go:1340] isVolumeReleased[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is released
I0907 20:33:27.668452       1 pv_controller.go:1404] doDeleteVolume [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]
I0907 20:33:27.692450       1 pv_controller.go:1259] deletion of volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:33:27.692473       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: set phase Failed
I0907 20:33:27.692483       1 pv_controller.go:858] updating PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: set phase Failed
I0907 20:33:27.696088       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1274
I0907 20:33:27.696195       1 pv_controller.go:879] volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" entered phase "Failed"
I0907 20:33:27.696213       1 pv_controller.go:901] volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
E0907 20:33:27.696284       1 goroutinemap.go:150] Operation for "delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]" failed. No retries permitted until 2022-09-07 20:33:28.19626512 +0000 UTC m=+345.197643767 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:33:27.696353       1 event.go:291] "Event occurred" object="pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted"
I0907 20:33:27.696114       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1274
I0907 20:33:27.696390       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase: Failed, bound to: "azuredisk-8081/pvc-8z4xd (uid: a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)", boundByController: true
I0907 20:33:27.696446       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is bound to claim azuredisk-8081/pvc-8z4xd
I0907 20:33:27.696470       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: claim azuredisk-8081/pvc-8z4xd not found
I0907 20:33:27.696503       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:33:27.696525       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:33:27.696537       1 pv_controller.go:1765] operation "delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]" postponed due to exponential backoff
I0907 20:33:27.696143       1 pv_protection_controller.go:205] Got event on PV pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3
... skipping 11 lines ...
I0907 20:33:35.420758       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:33:35.508040       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:33:36.343319       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 0 items received
I0907 20:33:40.352522       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:33:40.370146       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:33:40.370198       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1274
I0907 20:33:40.370239       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase: Failed, bound to: "azuredisk-8081/pvc-8z4xd (uid: a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)", boundByController: true
I0907 20:33:40.370271       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is bound to claim azuredisk-8081/pvc-8z4xd
I0907 20:33:40.370304       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: claim azuredisk-8081/pvc-8z4xd not found
I0907 20:33:40.370315       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:33:40.370331       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:33:40.370371       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3] started
I0907 20:33:40.375293       1 pv_controller.go:1340] isVolumeReleased[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is released
I0907 20:33:40.375309       1 pv_controller.go:1404] doDeleteVolume [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]
I0907 20:33:40.423507       1 pv_controller.go:1259] deletion of volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:33:40.423534       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: set phase Failed
I0907 20:33:40.423543       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase Failed already set
E0907 20:33:40.423736       1 goroutinemap.go:150] Operation for "delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]" failed. No retries permitted until 2022-09-07 20:33:41.423696176 +0000 UTC m=+358.425074823 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:33:40.634077       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Secret total 56 items received
I0907 20:33:41.678169       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60994" resp=200
I0907 20:33:41.889626       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3 from node "capz-l75yso-mp-0000001"
I0907 20:33:41.889680       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3"
I0907 20:33:41.889841       1 azure_controller_vmss.go:175] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)
I0907 20:33:42.318829       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 12 items received
... skipping 4 lines ...
I0907 20:33:51.678222       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="69.299µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45184" resp=200
I0907 20:33:52.481524       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:33:54.356388       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 0 items received
I0907 20:33:55.353609       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:33:55.371133       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:33:55.371265       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1274
I0907 20:33:55.371360       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase: Failed, bound to: "azuredisk-8081/pvc-8z4xd (uid: a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)", boundByController: true
I0907 20:33:55.371429       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is bound to claim azuredisk-8081/pvc-8z4xd
I0907 20:33:55.371507       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: claim azuredisk-8081/pvc-8z4xd not found
I0907 20:33:55.371542       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:33:55.371602       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:33:55.371638       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3] started
I0907 20:33:55.384465       1 pv_controller.go:1340] isVolumeReleased[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is released
I0907 20:33:55.384481       1 pv_controller.go:1404] doDeleteVolume [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]
I0907 20:33:55.384641       1 pv_controller.go:1259] deletion of volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) since it's in attaching or detaching state
I0907 20:33:55.384657       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: set phase Failed
I0907 20:33:55.384668       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase Failed already set
E0907 20:33:55.384761       1 goroutinemap.go:150] Operation for "delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]" failed. No retries permitted until 2022-09-07 20:33:57.384741683 +0000 UTC m=+374.386120330 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) since it's in attaching or detaching state
I0907 20:33:55.418996       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:33:55.421100       1 gc_controller.go:161] GC'ing orphaned
I0907 20:33:55.421119       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:33:55.953423       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:33:56.657500       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PodSecurityPolicy total 0 items received
I0907 20:33:57.150441       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3) returned with <nil>
... skipping 7 lines ...
I0907 20:34:03.538102       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-control-plane-9pv9f"
I0907 20:34:05.511528       1 node_lifecycle_controller.go:1047] Node capz-l75yso-control-plane-9pv9f ReadyCondition updated. Updating timestamp.
I0907 20:34:09.355758       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ReplicationController total 0 items received
I0907 20:34:10.354610       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:34:10.372100       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:34:10.372159       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1274
I0907 20:34:10.372195       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase: Failed, bound to: "azuredisk-8081/pvc-8z4xd (uid: a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)", boundByController: true
I0907 20:34:10.372226       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is bound to claim azuredisk-8081/pvc-8z4xd
I0907 20:34:10.372244       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: claim azuredisk-8081/pvc-8z4xd not found
I0907 20:34:10.372252       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:34:10.372277       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:34:10.372304       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3] started
I0907 20:34:10.376388       1 pv_controller.go:1340] isVolumeReleased[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is released
... skipping 3 lines ...
I0907 20:34:15.422026       1 gc_controller.go:161] GC'ing orphaned
I0907 20:34:15.422051       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:34:15.601930       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3
I0907 20:34:15.601964       1 pv_controller.go:1435] volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" deleted
I0907 20:34:15.601976       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: success
I0907 20:34:15.606795       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3" with version 1346
I0907 20:34:15.606841       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: phase: Failed, bound to: "azuredisk-8081/pvc-8z4xd (uid: a8b012b3-5727-4ffb-a18a-e157e0f6c4e3)", boundByController: true
I0907 20:34:15.606867       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: volume is bound to claim azuredisk-8081/pvc-8z4xd
I0907 20:34:15.606889       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: claim azuredisk-8081/pvc-8z4xd not found
I0907 20:34:15.606901       1 pv_controller.go:1108] reclaimVolume[pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3]: policy is Delete
I0907 20:34:15.606916       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3[63fd2f4f-5d66-41af-9c4f-19b3a52e147c]]
I0907 20:34:15.606942       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3] started
I0907 20:34:15.607113       1 pv_protection_controller.go:205] Got event on PV pvc-a8b012b3-5727-4ffb-a18a-e157e0f6c4e3
... skipping 132 lines ...
I0907 20:34:24.458904       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-fdfqb"
I0907 20:34:24.459135       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-fdfqb, PodDisruptionBudget controller will avoid syncing.
I0907 20:34:24.459248       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-fdfqb"
I0907 20:34:24.488011       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" lun 0 to node "capz-l75yso-mp-0000000".
I0907 20:34:24.488246       1 azure_controller_vmss.go:101] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - attach disk(capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) with DiskEncryptionSetID()
I0907 20:34:24.491570       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2540, name default-token-zm9bj, uid 400237b4-12b3-4eef-a0b6-693df9a5ebcb, event type delete
E0907 20:34:24.502352       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2540/default: secrets "default-token-bhsrv" is forbidden: unable to create new content in namespace azuredisk-2540 because it is being terminated
I0907 20:34:24.537805       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2540/default), service account deleted, removing tokens
I0907 20:34:24.537989       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2540, name default, uid 6dcf1699-be8a-4a2a-a624-221eefca0c87, event type delete
I0907 20:34:24.538011       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (2.199µs)
I0907 20:34:24.569329       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-2540, estimate: 0, errors: <nil>
I0907 20:34:24.569750       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2540" (3.2µs)
I0907 20:34:24.576967       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-2540" (164.830092ms)
... skipping 22 lines ...
I0907 20:34:25.373181       1 pv_controller.go:1038] volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" bound to claim "azuredisk-5466/pvc-dhlxn"
I0907 20:34:25.373221       1 pv_controller.go:1039] volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" status after binding: phase: Bound, bound to: "azuredisk-5466/pvc-dhlxn (uid: f7a4a805-195c-4bbc-87e8-aaa827c5142a)", boundByController: true
I0907 20:34:25.373254       1 pv_controller.go:1040] claim "azuredisk-5466/pvc-dhlxn" status after binding: phase: Bound, bound to: "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a", bindCompleted: true, boundByController: true
I0907 20:34:25.420105       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:34:25.477049       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4728
I0907 20:34:25.514570       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4728, name default-token-trdlz, uid 692efad6-3119-42fb-b2d3-55613daf447d, event type delete
E0907 20:34:25.529922       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4728/default: secrets "default-token-5z8jn" is forbidden: unable to create new content in namespace azuredisk-4728 because it is being terminated
I0907 20:34:25.567140       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4728, name kube-root-ca.crt, uid 12d8de7e-9b43-4a3b-93d0-86ded322973b, event type delete
I0907 20:34:25.568741       1 publisher.go:186] Finished syncing namespace "azuredisk-4728" (1.540787ms)
I0907 20:34:25.598247       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4728/default), service account deleted, removing tokens
I0907 20:34:25.598474       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.7µs)
I0907 20:34:25.598588       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4728, name default, uid 702c22e4-6e38-42a2-8c3b-4c9ebe33240d, event type delete
I0907 20:34:25.607888       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4728" (2.399µs)
... skipping 354 lines ...
I0907 20:36:29.152365       1 pv_controller.go:1108] reclaimVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: policy is Delete
I0907 20:36:29.152391       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]]
I0907 20:36:29.152402       1 pv_controller.go:1763] operation "delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]" is already running, skipping
I0907 20:36:29.152443       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a] started
I0907 20:36:29.163378       1 pv_controller.go:1340] isVolumeReleased[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is released
I0907 20:36:29.163526       1 pv_controller.go:1404] doDeleteVolume [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]
I0907 20:36:29.216103       1 pv_controller.go:1259] deletion of volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:36:29.216182       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: set phase Failed
I0907 20:36:29.216209       1 pv_controller.go:858] updating PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: set phase Failed
I0907 20:36:29.219218       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" with version 1610
I0907 20:36:29.219258       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: phase: Failed, bound to: "azuredisk-5466/pvc-dhlxn (uid: f7a4a805-195c-4bbc-87e8-aaa827c5142a)", boundByController: true
I0907 20:36:29.219282       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is bound to claim azuredisk-5466/pvc-dhlxn
I0907 20:36:29.219304       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: claim azuredisk-5466/pvc-dhlxn not found
I0907 20:36:29.219316       1 pv_controller.go:1108] reclaimVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: policy is Delete
I0907 20:36:29.219331       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]]
I0907 20:36:29.219342       1 pv_controller.go:1763] operation "delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]" is already running, skipping
I0907 20:36:29.219357       1 pv_protection_controller.go:205] Got event on PV pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a
I0907 20:36:29.220317       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" with version 1610
I0907 20:36:29.220485       1 pv_controller.go:879] volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" entered phase "Failed"
I0907 20:36:29.220535       1 pv_controller.go:901] volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
E0907 20:36:29.220634       1 goroutinemap.go:150] Operation for "delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]" failed. No retries permitted until 2022-09-07 20:36:29.720588874 +0000 UTC m=+526.721967521 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:36:29.220951       1 event.go:291] "Event occurred" object="pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted"
I0907 20:36:31.316315       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
I0907 20:36:31.316347       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a to the node "capz-l75yso-mp-0000000" mounted false
I0907 20:36:31.333785       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
I0907 20:36:31.333810       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a to the node "capz-l75yso-mp-0000000" mounted false
I0907 20:36:31.335523       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l75yso-mp-0000000" succeeded. VolumesAttached: []
... skipping 8 lines ...
I0907 20:36:35.424877       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:36:35.537458       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:36:39.350668       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0907 20:36:40.362573       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:40.379873       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:36:40.380009       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" with version 1610
I0907 20:36:40.380050       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: phase: Failed, bound to: "azuredisk-5466/pvc-dhlxn (uid: f7a4a805-195c-4bbc-87e8-aaa827c5142a)", boundByController: true
I0907 20:36:40.380101       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is bound to claim azuredisk-5466/pvc-dhlxn
I0907 20:36:40.380123       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: claim azuredisk-5466/pvc-dhlxn not found
I0907 20:36:40.380131       1 pv_controller.go:1108] reclaimVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: policy is Delete
I0907 20:36:40.380147       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]]
I0907 20:36:40.380205       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a] started
I0907 20:36:40.382897       1 pv_controller.go:1340] isVolumeReleased[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is released
I0907 20:36:40.382915       1 pv_controller.go:1404] doDeleteVolume [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]
I0907 20:36:40.383082       1 pv_controller.go:1259] deletion of volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) since it's in attaching or detaching state
I0907 20:36:40.383098       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: set phase Failed
I0907 20:36:40.383120       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: phase Failed already set
E0907 20:36:40.383172       1 goroutinemap.go:150] Operation for "delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]" failed. No retries permitted until 2022-09-07 20:36:41.38313119 +0000 UTC m=+538.384509837 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) since it's in attaching or detaching state
I0907 20:36:41.678312       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="72.599µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34020" resp=200
I0907 20:36:46.319361       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 0 items received
I0907 20:36:46.597252       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) returned with <nil>
I0907 20:36:46.597294       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a) succeeded
I0907 20:36:46.597304       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a was detached from node:capz-l75yso-mp-0000000
I0907 20:36:46.597326       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a") on node "capz-l75yso-mp-0000000" 
I0907 20:36:51.677904       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="70.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46142" resp=200
I0907 20:36:55.363028       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:36:55.380278       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:36:55.380342       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" with version 1610
I0907 20:36:55.380393       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: phase: Failed, bound to: "azuredisk-5466/pvc-dhlxn (uid: f7a4a805-195c-4bbc-87e8-aaa827c5142a)", boundByController: true
I0907 20:36:55.380423       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is bound to claim azuredisk-5466/pvc-dhlxn
I0907 20:36:55.380440       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: claim azuredisk-5466/pvc-dhlxn not found
I0907 20:36:55.380448       1 pv_controller.go:1108] reclaimVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: policy is Delete
I0907 20:36:55.380461       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]]
I0907 20:36:55.380496       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a] started
I0907 20:36:55.385670       1 pv_controller.go:1340] isVolumeReleased[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is released
... skipping 4 lines ...
I0907 20:36:56.043532       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:36:58.345390       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Event total 156 items received
I0907 20:37:00.627509       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a
I0907 20:37:00.627536       1 pv_controller.go:1435] volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" deleted
I0907 20:37:00.627568       1 pv_controller.go:1283] deleteVolumeOperation [pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: success
I0907 20:37:00.631525       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a" with version 1658
I0907 20:37:00.631587       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: phase: Failed, bound to: "azuredisk-5466/pvc-dhlxn (uid: f7a4a805-195c-4bbc-87e8-aaa827c5142a)", boundByController: true
I0907 20:37:00.631676       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: volume is bound to claim azuredisk-5466/pvc-dhlxn
I0907 20:37:00.631736       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: claim azuredisk-5466/pvc-dhlxn not found
I0907 20:37:00.631746       1 pv_controller.go:1108] reclaimVolume[pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a]: policy is Delete
I0907 20:37:00.631761       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]]
I0907 20:37:00.631798       1 pv_controller.go:1763] operation "delete-pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a[7dee1dd9-781b-4706-a2ea-f84be2d70b32]" is already running, skipping
I0907 20:37:00.631819       1 pv_protection_controller.go:205] Got event on PV pvc-f7a4a805-195c-4bbc-87e8-aaa827c5142a
... skipping 277 lines ...
I0907 20:37:26.575197       1 pv_controller.go:1108] reclaimVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: policy is Delete
I0907 20:37:26.575287       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]]
I0907 20:37:26.575381       1 pv_controller.go:1763] operation "delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]" is already running, skipping
I0907 20:37:26.574849       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7] started
I0907 20:37:26.576971       1 pv_controller.go:1340] isVolumeReleased[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is released
I0907 20:37:26.576987       1 pv_controller.go:1404] doDeleteVolume [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]
I0907 20:37:26.596430       1 pv_controller.go:1259] deletion of volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:37:26.596447       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: set phase Failed
I0907 20:37:26.596475       1 pv_controller.go:858] updating PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: set phase Failed
I0907 20:37:26.598886       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" with version 1757
I0907 20:37:26.598922       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: phase: Failed, bound to: "azuredisk-2790/pvc-dz5cc (uid: c51f32ca-cc78-41ff-b460-5bd9d0615fa7)", boundByController: true
I0907 20:37:26.598944       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is bound to claim azuredisk-2790/pvc-dz5cc
I0907 20:37:26.599701       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" with version 1757
I0907 20:37:26.599737       1 pv_controller.go:879] volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" entered phase "Failed"
I0907 20:37:26.599746       1 pv_controller.go:901] volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
E0907 20:37:26.599781       1 goroutinemap.go:150] Operation for "delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]" failed. No retries permitted until 2022-09-07 20:37:27.099763861 +0000 UTC m=+584.101142408 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:37:26.600013       1 event.go:291] "Event occurred" object="pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted"
I0907 20:37:26.600019       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: claim azuredisk-2790/pvc-dz5cc not found
I0907 20:37:26.600229       1 pv_controller.go:1108] reclaimVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: policy is Delete
I0907 20:37:26.600336       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]]
I0907 20:37:26.600462       1 pv_controller.go:1765] operation "delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]" postponed due to exponential backoff
I0907 20:37:26.600030       1 pv_protection_controller.go:205] Got event on PV pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7
... skipping 15 lines ...
I0907 20:37:35.427332       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:37:35.546735       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:37:38.350955       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:37:40.365060       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:37:40.383890       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:37:40.383958       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" with version 1757
I0907 20:37:40.383998       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: phase: Failed, bound to: "azuredisk-2790/pvc-dz5cc (uid: c51f32ca-cc78-41ff-b460-5bd9d0615fa7)", boundByController: true
I0907 20:37:40.384035       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is bound to claim azuredisk-2790/pvc-dz5cc
I0907 20:37:40.384058       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: claim azuredisk-2790/pvc-dz5cc not found
I0907 20:37:40.384068       1 pv_controller.go:1108] reclaimVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: policy is Delete
I0907 20:37:40.384086       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]]
I0907 20:37:40.384116       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7] started
I0907 20:37:40.393152       1 pv_controller.go:1340] isVolumeReleased[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is released
I0907 20:37:40.393170       1 pv_controller.go:1404] doDeleteVolume [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]
I0907 20:37:40.393203       1 pv_controller.go:1259] deletion of volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) since it's in attaching or detaching state
I0907 20:37:40.393220       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: set phase Failed
I0907 20:37:40.393230       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: phase Failed already set
E0907 20:37:40.393259       1 goroutinemap.go:150] Operation for "delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]" failed. No retries permitted until 2022-09-07 20:37:41.393236567 +0000 UTC m=+598.394615214 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) since it's in attaching or detaching state
I0907 20:37:41.677741       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="78.899µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45866" resp=200
I0907 20:37:44.320750       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 17 items received
I0907 20:37:44.344823       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 22 items received
I0907 20:37:47.556719       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0907 20:37:49.330744       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) returned with <nil>
I0907 20:37:49.330790       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7) succeeded
... skipping 3 lines ...
I0907 20:37:51.677970       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.9µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:35116" resp=200
I0907 20:37:55.340218       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:37:55.340218       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:37:55.365860       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:37:55.384997       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:37:55.385053       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" with version 1757
I0907 20:37:55.385090       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: phase: Failed, bound to: "azuredisk-2790/pvc-dz5cc (uid: c51f32ca-cc78-41ff-b460-5bd9d0615fa7)", boundByController: true
I0907 20:37:55.385128       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is bound to claim azuredisk-2790/pvc-dz5cc
I0907 20:37:55.385150       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: claim azuredisk-2790/pvc-dz5cc not found
I0907 20:37:55.385160       1 pv_controller.go:1108] reclaimVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: policy is Delete
I0907 20:37:55.385176       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]]
I0907 20:37:55.385205       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7] started
I0907 20:37:55.392476       1 pv_controller.go:1340] isVolumeReleased[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is released
... skipping 11 lines ...
I0907 20:37:58.001024       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-07 20:37:58.000980189 +0000 UTC m=+615.002358836"
I0907 20:37:58.001677       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="684.496µs"
I0907 20:38:00.581357       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7
I0907 20:38:00.581386       1 pv_controller.go:1435] volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" deleted
I0907 20:38:00.581396       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: success
I0907 20:38:00.591772       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7" with version 1807
I0907 20:38:00.592337       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: phase: Failed, bound to: "azuredisk-2790/pvc-dz5cc (uid: c51f32ca-cc78-41ff-b460-5bd9d0615fa7)", boundByController: true
I0907 20:38:00.592487       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: volume is bound to claim azuredisk-2790/pvc-dz5cc
I0907 20:38:00.592555       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: claim azuredisk-2790/pvc-dz5cc not found
I0907 20:38:00.592581       1 pv_controller.go:1108] reclaimVolume[pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7]: policy is Delete
I0907 20:38:00.592627       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]]
I0907 20:38:00.592670       1 pv_controller.go:1763] operation "delete-pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7[daeb2eb4-417e-4624-89e7-217a3e22eefb]" is already running, skipping
I0907 20:38:00.592090       1 pv_protection_controller.go:205] Got event on PV pvc-c51f32ca-cc78-41ff-b460-5bd9d0615fa7
... skipping 107 lines ...
I0907 20:38:06.275223       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-l75yso-mp-0000001, refreshing the cache
I0907 20:38:06.338109       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" to node "capz-l75yso-mp-0000001".
I0907 20:38:06.358169       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" lun 0 to node "capz-l75yso-mp-0000001".
I0907 20:38:06.358209       1 azure_controller_vmss.go:101] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - attach disk(capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) with DiskEncryptionSetID()
I0907 20:38:07.248914       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-2790
I0907 20:38:07.269541       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2790, name default-token-wwk9j, uid f76c0275-029b-4cee-85be-1c52b05107f9, event type delete
E0907 20:38:07.286439       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2790/default: secrets "default-token-6cclv" is forbidden: unable to create new content in namespace azuredisk-2790 because it is being terminated
I0907 20:38:07.309737       1 tokens_controller.go:252] syncServiceAccount(azuredisk-2790/default), service account deleted, removing tokens
I0907 20:38:07.309778       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-2790, name default, uid cde30e15-197a-4134-886f-287d8d7ad30d, event type delete
I0907 20:38:07.310532       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-2790" (2.4µs)
I0907 20:38:07.317866       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-9nf2s.1712ae74b125efa5, uid 6d1d855d-513d-44c0-b4f3-72c65b166a58, event type delete
I0907 20:38:07.321065       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-9nf2s.1712ae77187ec411, uid 1f74b5ae-e7be-4a94-9cc0-5d90bde924dc, event type delete
I0907 20:38:07.323870       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2790, name azuredisk-volume-tester-9nf2s.1712ae779854ebbe, uid a1c47afd-4c8b-4e84-b31f-a4990dfdaa90, event type delete
... skipping 137 lines ...
I0907 20:38:24.179206       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:38:24.179232       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:38:24.179240       1 pv_controller.go:1763] operation "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" is already running, skipping
I0907 20:38:24.179265       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65] started
I0907 20:38:24.182479       1 pv_controller.go:1340] isVolumeReleased[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is released
I0907 20:38:24.182494       1 pv_controller.go:1404] doDeleteVolume [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]
I0907 20:38:24.182541       1 pv_controller.go:1259] deletion of volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
I0907 20:38:24.182552       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: set phase Failed
I0907 20:38:24.182561       1 pv_controller.go:858] updating PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: set phase Failed
I0907 20:38:24.184885       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1901
I0907 20:38:24.184933       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase: Failed, bound to: "azuredisk-5356/pvc-zs5g6 (uid: ce9d565e-c04d-4265-b6c9-aef864fe9a65)", boundByController: true
I0907 20:38:24.184988       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is bound to claim azuredisk-5356/pvc-zs5g6
I0907 20:38:24.185018       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: claim azuredisk-5356/pvc-zs5g6 not found
I0907 20:38:24.185028       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:38:24.185068       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:38:24.185087       1 pv_controller.go:1763] operation "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" is already running, skipping
I0907 20:38:24.185106       1 pv_protection_controller.go:205] Got event on PV pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65
I0907 20:38:24.185992       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1901
I0907 20:38:24.186018       1 pv_controller.go:879] volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" entered phase "Failed"
I0907 20:38:24.186027       1 pv_controller.go:901] volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
E0907 20:38:24.186067       1 goroutinemap.go:150] Operation for "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" failed. No retries permitted until 2022-09-07 20:38:24.686049174 +0000 UTC m=+641.687427721 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
I0907 20:38:24.186254       1 event.go:291] "Event occurred" object="pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state"
I0907 20:38:25.367471       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:25.386609       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:25.386665       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1901
I0907 20:38:25.386701       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase: Failed, bound to: "azuredisk-5356/pvc-zs5g6 (uid: ce9d565e-c04d-4265-b6c9-aef864fe9a65)", boundByController: true
I0907 20:38:25.386731       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is bound to claim azuredisk-5356/pvc-zs5g6
I0907 20:38:25.386754       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: claim azuredisk-5356/pvc-zs5g6 not found
I0907 20:38:25.386763       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:38:25.386779       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:38:25.386813       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65] started
I0907 20:38:25.391493       1 pv_controller.go:1340] isVolumeReleased[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is released
I0907 20:38:25.391511       1 pv_controller.go:1404] doDeleteVolume [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]
I0907 20:38:25.391540       1 pv_controller.go:1259] deletion of volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
I0907 20:38:25.391553       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: set phase Failed
I0907 20:38:25.391563       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase Failed already set
E0907 20:38:25.391589       1 goroutinemap.go:150] Operation for "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" failed. No retries permitted until 2022-09-07 20:38:26.391571503 +0000 UTC m=+643.392950050 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
I0907 20:38:25.425028       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:25.551772       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:38:26.080889       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:38:31.678076       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="56.899µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:60990" resp=200
I0907 20:38:35.429328       1 gc_controller.go:161] GC'ing orphaned
I0907 20:38:35.429355       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:38:37.511027       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-af3dvx" (12.7µs)
I0907 20:38:39.866329       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0907 20:38:40.367895       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:40.387037       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:40.387138       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1901
I0907 20:38:40.387211       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase: Failed, bound to: "azuredisk-5356/pvc-zs5g6 (uid: ce9d565e-c04d-4265-b6c9-aef864fe9a65)", boundByController: true
I0907 20:38:40.387274       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is bound to claim azuredisk-5356/pvc-zs5g6
I0907 20:38:40.387311       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: claim azuredisk-5356/pvc-zs5g6 not found
I0907 20:38:40.387319       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:38:40.387362       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:38:40.387416       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65] started
I0907 20:38:40.391040       1 pv_controller.go:1340] isVolumeReleased[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is released
I0907 20:38:40.391128       1 pv_controller.go:1404] doDeleteVolume [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]
I0907 20:38:40.391213       1 pv_controller.go:1259] deletion of volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
I0907 20:38:40.391229       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: set phase Failed
I0907 20:38:40.391239       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase Failed already set
E0907 20:38:40.391267       1 goroutinemap.go:150] Operation for "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" failed. No retries permitted until 2022-09-07 20:38:42.391247692 +0000 UTC m=+659.392626339 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) since it's in attaching or detaching state
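The three failed delete attempts for pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 above back off with doubling waits (durationBeforeRetry 500ms, then 1s, then 2s) while the disk is still in an attaching or detaching state. A rough shell analogue of that retry pattern, for illustration only; the $DISK_ID variable and the use of the az CLI are assumptions, not part of this log:

    # keep retrying the delete, doubling the wait after each failure,
    # roughly mirroring the controller's durationBeforeRetry sequence above
    delay=0.5
    until az disk delete --yes --ids "$DISK_ID"; do   # fails while the disk is attached or detaching
      sleep "$delay"
      delay=$(echo "$delay * 2" | bc)                 # 0.5s -> 1s -> 2s -> ...
    done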
I0907 20:38:41.677895       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="82.099µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54584" resp=200
I0907 20:38:44.371024       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) returned with <nil>
I0907 20:38:44.371070       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65) succeeded
I0907 20:38:44.371081       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 was detached from node:capz-l75yso-mp-0000001
I0907 20:38:44.371103       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65") on node "capz-l75yso-mp-0000001" 
I0907 20:38:51.678562       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.999µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51350" resp=200
I0907 20:38:55.368581       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:38:55.387716       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:38:55.387775       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1901
I0907 20:38:55.387811       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase: Failed, bound to: "azuredisk-5356/pvc-zs5g6 (uid: ce9d565e-c04d-4265-b6c9-aef864fe9a65)", boundByController: true
I0907 20:38:55.387857       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is bound to claim azuredisk-5356/pvc-zs5g6
I0907 20:38:55.387878       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: claim azuredisk-5356/pvc-zs5g6 not found
I0907 20:38:55.387887       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:38:55.387902       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:38:55.387937       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65] started
I0907 20:38:55.395313       1 pv_controller.go:1340] isVolumeReleased[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is released
... skipping 3 lines ...
I0907 20:38:55.430112       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:38:56.091946       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:39:00.541579       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65
I0907 20:39:00.541606       1 pv_controller.go:1435] volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" deleted
I0907 20:39:00.541618       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: success
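Deletion of pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 only succeeds here, after DetachVolume.Detach completed at 20:38:44; every earlier doDeleteVolume attempt failed because the disk was still in an attaching or detaching state. A minimal way to inspect the same ordering by hand, assuming kubectl access to this workload cluster (these commands are illustrative and not taken from this log):

    # the PV's events show the VolumeFailedDelete warnings seen above
    kubectl describe pv pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65
    # once no VolumeAttachment references the PV, the disk has been detached
    kubectl get volumeattachment | grep pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65 \
      || echo "disk detached from the node"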
I0907 20:39:00.552145       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65" with version 1955
I0907 20:39:00.552351       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: phase: Failed, bound to: "azuredisk-5356/pvc-zs5g6 (uid: ce9d565e-c04d-4265-b6c9-aef864fe9a65)", boundByController: true
I0907 20:39:00.552491       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: volume is bound to claim azuredisk-5356/pvc-zs5g6
I0907 20:39:00.552517       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: claim azuredisk-5356/pvc-zs5g6 not found
I0907 20:39:00.552525       1 pv_controller.go:1108] reclaimVolume[pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65]: policy is Delete
I0907 20:39:00.552560       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]]
I0907 20:39:00.552570       1 pv_controller.go:1763] operation "delete-pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65[70494946-d430-4d12-8e04-667b1148c937]" is already running, skipping
I0907 20:39:00.552592       1 pv_protection_controller.go:205] Got event on PV pvc-ce9d565e-c04d-4265-b6c9-aef864fe9a65
... skipping 807 lines ...
I0907 20:40:34.341229       1 pv_controller.go:1108] reclaimVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: policy is Delete
I0907 20:40:34.341241       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]]
I0907 20:40:34.341248       1 pv_controller.go:1763] operation "delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]" is already running, skipping
I0907 20:40:34.341272       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b] started
I0907 20:40:34.343133       1 pv_controller.go:1340] isVolumeReleased[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: volume is released
I0907 20:40:34.343259       1 pv_controller.go:1404] doDeleteVolume [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]
I0907 20:40:34.366954       1 pv_controller.go:1259] deletion of volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:40:34.366972       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: set phase Failed
I0907 20:40:34.366981       1 pv_controller.go:858] updating PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: set phase Failed
I0907 20:40:34.369699       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" with version 2195
I0907 20:40:34.369771       1 pv_protection_controller.go:205] Got event on PV pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b
I0907 20:40:34.369787       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: phase: Failed, bound to: "azuredisk-5194/pvc-642qf (uid: 0bb19ae2-6a11-4ca9-9950-620030c4d09b)", boundByController: true
I0907 20:40:34.369863       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: volume is bound to claim azuredisk-5194/pvc-642qf
I0907 20:40:34.369902       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: claim azuredisk-5194/pvc-642qf not found
I0907 20:40:34.369940       1 pv_controller.go:1108] reclaimVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: policy is Delete
I0907 20:40:34.369955       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]]
I0907 20:40:34.369963       1 pv_controller.go:1763] operation "delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]" is already running, skipping
I0907 20:40:34.370703       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" with version 2195
I0907 20:40:34.370725       1 pv_controller.go:879] volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" entered phase "Failed"
I0907 20:40:34.370734       1 pv_controller.go:901] volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
E0907 20:40:34.370810       1 goroutinemap.go:150] Operation for "delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]" failed. No retries permitted until 2022-09-07 20:40:34.870791529 +0000 UTC m=+771.872170076 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:40:34.370886       1 event.go:291] "Event occurred" object="pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted"
I0907 20:40:34.374806       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b on node "capz-l75yso-mp-0000001"
I0907 20:40:34.381431       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
I0907 20:40:34.383788       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 to the node "capz-l75yso-mp-0000001" mounted true
I0907 20:40:34.383941       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b to the node "capz-l75yso-mp-0000001" mounted false
I0907 20:40:34.383059       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543\"}]}}" for node "capz-l75yso-mp-0000001" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 0}]
... skipping 7 lines ...
I0907 20:40:35.434079       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:40:35.569872       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:40:36.645518       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:40:40.371415       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:40:40.393685       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:40:40.393842       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" with version 2195
I0907 20:40:40.393943       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: phase: Failed, bound to: "azuredisk-5194/pvc-642qf (uid: 0bb19ae2-6a11-4ca9-9950-620030c4d09b)", boundByController: true
I0907 20:40:40.393847       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-s8k78" with version 1986
I0907 20:40:40.394054       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: phase: Bound, bound to: "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa", bindCompleted: true, boundByController: true
I0907 20:40:40.394147       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" found: phase: Bound, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:40:40.394161       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: claim is already correctly bound
I0907 20:40:40.394196       1 pv_controller.go:1012] binding volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" to claim "azuredisk-5194/pvc-s8k78"
I0907 20:40:40.394208       1 pv_controller.go:910] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: binding to "azuredisk-5194/pvc-s8k78"
... skipping 41 lines ...
I0907 20:40:40.395953       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: all is bound
I0907 20:40:40.395988       1 pv_controller.go:858] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Bound
I0907 20:40:40.396014       1 pv_controller.go:861] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase Bound already set
I0907 20:40:40.394996       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b] started
I0907 20:40:40.404040       1 pv_controller.go:1340] isVolumeReleased[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: volume is released
I0907 20:40:40.404101       1 pv_controller.go:1404] doDeleteVolume [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]
I0907 20:40:40.404134       1 pv_controller.go:1259] deletion of volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) since it's in attaching or detaching state
I0907 20:40:40.404145       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: set phase Failed
I0907 20:40:40.404153       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: phase Failed already set
E0907 20:40:40.404182       1 goroutinemap.go:150] Operation for "delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]" failed. No retries permitted until 2022-09-07 20:40:41.40416093 +0000 UTC m=+778.405539577 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) since it's in attaching or detaching state
I0907 20:40:41.678273       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="63.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57382" resp=200
I0907 20:40:48.713296       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ValidatingWebhookConfiguration total 0 items received
I0907 20:40:49.760395       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) returned with <nil>
I0907 20:40:49.760436       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b) succeeded
I0907 20:40:49.760445       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b was detached from node:capz-l75yso-mp-0000001
I0907 20:40:49.760590       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b") on node "capz-l75yso-mp-0000001" 
... skipping 30 lines ...
I0907 20:40:55.395908       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:40:55.395924       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj found: phase: Bound, bound to: "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543", bindCompleted: true, boundByController: true
I0907 20:40:55.395981       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: all is bound
I0907 20:40:55.396063       1 pv_controller.go:858] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Bound
I0907 20:40:55.396143       1 pv_controller.go:861] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase Bound already set
I0907 20:40:55.396160       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" with version 2195
I0907 20:40:55.396178       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: phase: Failed, bound to: "azuredisk-5194/pvc-642qf (uid: 0bb19ae2-6a11-4ca9-9950-620030c4d09b)", boundByController: true
I0907 20:40:55.395480       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5194/pvc-wtxnj" with version 2052
I0907 20:40:55.396249       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5194/pvc-wtxnj]: phase: Bound, bound to: "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543", bindCompleted: true, boundByController: true
I0907 20:40:55.396303       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-wtxnj]: volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" found: phase: Bound, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:40:55.396321       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5194/pvc-wtxnj]: claim is already correctly bound
I0907 20:40:55.396341       1 pv_controller.go:1012] binding volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" to claim "azuredisk-5194/pvc-wtxnj"
I0907 20:40:55.396351       1 pv_controller.go:910] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: binding to "azuredisk-5194/pvc-wtxnj"
... skipping 21 lines ...
I0907 20:40:56.386169       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:40:57.727450       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:41:00.574868       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b
I0907 20:41:00.574895       1 pv_controller.go:1435] volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" deleted
I0907 20:41:00.575058       1 pv_controller.go:1283] deleteVolumeOperation [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: success
I0907 20:41:00.584030       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b" with version 2237
I0907 20:41:00.584067       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: phase: Failed, bound to: "azuredisk-5194/pvc-642qf (uid: 0bb19ae2-6a11-4ca9-9950-620030c4d09b)", boundByController: true
I0907 20:41:00.584115       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: volume is bound to claim azuredisk-5194/pvc-642qf
I0907 20:41:00.584153       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: claim azuredisk-5194/pvc-642qf not found
I0907 20:41:00.584189       1 pv_controller.go:1108] reclaimVolume[pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b]: policy is Delete
I0907 20:41:00.584221       1 pv_controller.go:1752] scheduleOperation[delete-pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b[17e52629-a7b3-4beb-ba54-aa262f2828c5]]
I0907 20:41:00.584316       1 pv_controller.go:1231] deleteVolumeOperation [pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b] started
I0907 20:41:00.584521       1 pv_protection_controller.go:205] Got event on PV pvc-0bb19ae2-6a11-4ca9-9950-620030c4d09b
... skipping 203 lines ...
I0907 20:41:35.490000       1 pv_controller.go:1108] reclaimVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: policy is Delete
I0907 20:41:35.490090       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]]
I0907 20:41:35.490185       1 pv_controller.go:1763] operation "delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]" is already running, skipping
I0907 20:41:35.489878       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543] started
I0907 20:41:35.491827       1 pv_controller.go:1340] isVolumeReleased[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is released
I0907 20:41:35.491862       1 pv_controller.go:1404] doDeleteVolume [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]
I0907 20:41:35.513546       1 pv_controller.go:1259] deletion of volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:41:35.513565       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Failed
I0907 20:41:35.513573       1 pv_controller.go:858] updating PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Failed
I0907 20:41:35.516087       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" with version 2299
I0907 20:41:35.516283       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase: Failed, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:41:35.516425       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:41:35.516563       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj not found
I0907 20:41:35.516686       1 pv_controller.go:1108] reclaimVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: policy is Delete
I0907 20:41:35.516806       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]]
I0907 20:41:35.516970       1 pv_controller.go:1763] operation "delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]" is already running, skipping
I0907 20:41:35.516360       1 pv_protection_controller.go:205] Got event on PV pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543
I0907 20:41:35.517572       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" with version 2299
I0907 20:41:35.517594       1 pv_controller.go:879] volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" entered phase "Failed"
I0907 20:41:35.517603       1 pv_controller.go:901] volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
E0907 20:41:35.517647       1 goroutinemap.go:150] Operation for "delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]" failed. No retries permitted until 2022-09-07 20:41:36.017628604 +0000 UTC m=+833.019007251 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:41:35.517723       1 event.go:291] "Event occurred" object="pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted"
I0907 20:41:40.373834       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:41:40.396967       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:41:40.397051       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 1983
I0907 20:41:40.397114       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Bound, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:41:40.397165       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
... skipping 11 lines ...
I0907 20:41:40.397548       1 pv_controller.go:922] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: already bound to "azuredisk-5194/pvc-s8k78"
I0907 20:41:40.397557       1 pv_controller.go:858] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Bound
I0907 20:41:40.397584       1 pv_controller.go:861] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase Bound already set
I0907 20:41:40.397594       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: binding to "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa"
I0907 20:41:40.397613       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: already bound to "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa"
I0907 20:41:40.397623       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78] status: set phase Bound
I0907 20:41:40.397383       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase: Failed, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:41:40.397723       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:41:40.397756       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj not found
I0907 20:41:40.397765       1 pv_controller.go:1108] reclaimVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: policy is Delete
I0907 20:41:40.397856       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]]
I0907 20:41:40.397923       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543] started
I0907 20:41:40.397995       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78] status: phase Bound already set
I0907 20:41:40.398088       1 pv_controller.go:1038] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" bound to claim "azuredisk-5194/pvc-s8k78"
I0907 20:41:40.398197       1 pv_controller.go:1039] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:41:40.398296       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-s8k78" status after binding: phase: Bound, bound to: "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa", bindCompleted: true, boundByController: true
I0907 20:41:40.407339       1 pv_controller.go:1340] isVolumeReleased[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is released
I0907 20:41:40.407356       1 pv_controller.go:1404] doDeleteVolume [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]
I0907 20:41:40.428266       1 pv_controller.go:1259] deletion of volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:41:40.428333       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Failed
I0907 20:41:40.428447       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase Failed already set
E0907 20:41:40.428484       1 goroutinemap.go:150] Operation for "delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]" failed. No retries permitted until 2022-09-07 20:41:41.42846308 +0000 UTC m=+838.429841627 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:41:41.678282       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50948" resp=200
I0907 20:41:44.226617       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
I0907 20:41:44.226643       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543 to the node "capz-l75yso-mp-0000001" mounted false
I0907 20:41:44.261404       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l75yso-mp-0000001" succeeded. VolumesAttached: []
I0907 20:41:44.262534       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543") on node "capz-l75yso-mp-0000001" 
I0907 20:41:44.262169       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
... skipping 19 lines ...
I0907 20:41:55.398140       1 pv_controller.go:1012] binding volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" to claim "azuredisk-5194/pvc-s8k78"
I0907 20:41:55.398257       1 pv_controller.go:910] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: binding to "azuredisk-5194/pvc-s8k78"
I0907 20:41:55.397970       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: all is bound
I0907 20:41:55.398378       1 pv_controller.go:858] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Bound
I0907 20:41:55.398437       1 pv_controller.go:861] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase Bound already set
I0907 20:41:55.398512       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" with version 2299
I0907 20:41:55.398589       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase: Failed, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:41:55.398347       1 pv_controller.go:922] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: already bound to "azuredisk-5194/pvc-s8k78"
I0907 20:41:55.398740       1 pv_controller.go:858] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Bound
I0907 20:41:55.398759       1 pv_controller.go:861] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase Bound already set
I0907 20:41:55.398767       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: binding to "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa"
I0907 20:41:55.398648       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:41:55.398860       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj not found
... skipping 5 lines ...
I0907 20:41:55.399225       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78] status: phase Bound already set
I0907 20:41:55.399455       1 pv_controller.go:1038] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" bound to claim "azuredisk-5194/pvc-s8k78"
I0907 20:41:55.399540       1 pv_controller.go:1039] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:41:55.399623       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-s8k78" status after binding: phase: Bound, bound to: "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa", bindCompleted: true, boundByController: true
I0907 20:41:55.406796       1 pv_controller.go:1340] isVolumeReleased[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is released
I0907 20:41:55.406818       1 pv_controller.go:1404] doDeleteVolume [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]
I0907 20:41:55.406850       1 pv_controller.go:1259] deletion of volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) since it's in attaching or detaching state
I0907 20:41:55.406865       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: set phase Failed
I0907 20:41:55.406875       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase Failed already set
E0907 20:41:55.406921       1 goroutinemap.go:150] Operation for "delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]" failed. No retries permitted until 2022-09-07 20:41:57.406884368 +0000 UTC m=+854.408263015 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) since it's in attaching or detaching state
I0907 20:41:55.429149       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:41:55.437368       1 gc_controller.go:161] GC'ing orphaned
I0907 20:41:55.437390       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:41:56.178182       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:41:59.449218       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) returned with <nil>
I0907 20:41:59.449376       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543) succeeded
... skipping 16 lines ...
I0907 20:42:10.398929       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78]: already bound to "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa"
I0907 20:42:10.398940       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78] status: set phase Bound
I0907 20:42:10.398961       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5194/pvc-s8k78] status: phase Bound already set
I0907 20:42:10.398973       1 pv_controller.go:1038] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" bound to claim "azuredisk-5194/pvc-s8k78"
I0907 20:42:10.399012       1 pv_controller.go:1039] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" status after binding: phase: Bound, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:42:10.399032       1 pv_controller.go:1040] claim "azuredisk-5194/pvc-s8k78" status after binding: phase: Bound, bound to: "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa", bindCompleted: true, boundByController: true
I0907 20:42:10.399057       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase: Failed, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:42:10.399096       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:42:10.399129       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj not found
I0907 20:42:10.399142       1 pv_controller.go:1108] reclaimVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: policy is Delete
I0907 20:42:10.399158       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]]
I0907 20:42:10.399187       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 1983
I0907 20:42:10.399230       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543] started
... skipping 10 lines ...
I0907 20:42:15.438297       1 gc_controller.go:161] GC'ing orphaned
I0907 20:42:15.438325       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:42:15.563289       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543
I0907 20:42:15.563317       1 pv_controller.go:1435] volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" deleted
I0907 20:42:15.563445       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: success
I0907 20:42:15.574708       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543" with version 2358
I0907 20:42:15.574761       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: phase: Failed, bound to: "azuredisk-5194/pvc-wtxnj (uid: ad0ced58-98ed-467f-879b-3aeeb0f24543)", boundByController: true
I0907 20:42:15.574789       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: volume is bound to claim azuredisk-5194/pvc-wtxnj
I0907 20:42:15.574808       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: claim azuredisk-5194/pvc-wtxnj not found
I0907 20:42:15.574816       1 pv_controller.go:1108] reclaimVolume[pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543]: policy is Delete
I0907 20:42:15.574846       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543[7b275279-e52c-4e4b-bd1b-f4550a42ea16]]
I0907 20:42:15.574868       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543] started
I0907 20:42:15.575076       1 pv_protection_controller.go:205] Got event on PV pvc-ad0ced58-98ed-467f-879b-3aeeb0f24543
... skipping 145 lines ...
I0907 20:42:51.052531       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:42:51.052542       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:42:51.052549       1 pv_controller.go:1763] operation "delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]" is already running, skipping
I0907 20:42:51.052563       1 pv_protection_controller.go:205] Got event on PV pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa
I0907 20:42:51.058604       1 pv_controller.go:1340] isVolumeReleased[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is released
I0907 20:42:51.058623       1 pv_controller.go:1404] doDeleteVolume [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]
I0907 20:42:51.097555       1 pv_controller.go:1259] deletion of volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:42:51.097636       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Failed
I0907 20:42:51.097647       1 pv_controller.go:858] updating PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Failed
I0907 20:42:51.101279       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2425
I0907 20:42:51.101561       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Failed, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:42:51.101968       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
I0907 20:42:51.102231       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: claim azuredisk-5194/pvc-s8k78 not found
I0907 20:42:51.102407       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:42:51.101478       1 pv_protection_controller.go:205] Got event on PV pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa
I0907 20:42:51.101911       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2425
I0907 20:42:51.102912       1 pv_controller.go:879] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" entered phase "Failed"
I0907 20:42:51.103092       1 pv_controller.go:901] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
E0907 20:42:51.103317       1 goroutinemap.go:150] Operation for "delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]" failed. No retries permitted until 2022-09-07 20:42:51.603264233 +0000 UTC m=+908.604642780 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:42:51.102738       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:42:51.104185       1 pv_controller.go:1765] operation "delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]" postponed due to exponential backoff
I0907 20:42:51.103532       1 event.go:291] "Event occurred" object="pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted"
I0907 20:42:51.119762       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa on node "capz-l75yso-mp-0000000"
I0907 20:42:51.625220       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
I0907 20:42:51.626178       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa to the node "capz-l75yso-mp-0000000" mounted false
... skipping 8 lines ...
I0907 20:42:51.748173       1 azure_controller_vmss.go:175] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa)
I0907 20:42:55.340548       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:55.340548       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:55.376421       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:55.400563       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:42:55.400664       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2425
I0907 20:42:55.400755       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Failed, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:42:55.400826       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
I0907 20:42:55.400850       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: claim azuredisk-5194/pvc-s8k78 not found
I0907 20:42:55.400889       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:42:55.400918       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:42:55.400986       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa] started
I0907 20:42:55.407748       1 pv_controller.go:1340] isVolumeReleased[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is released
I0907 20:42:55.407766       1 pv_controller.go:1404] doDeleteVolume [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]
I0907 20:42:55.407827       1 pv_controller.go:1259] deletion of volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) since it's in attaching or detaching state
I0907 20:42:55.407867       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Failed
I0907 20:42:55.407877       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase Failed already set
E0907 20:42:55.407934       1 goroutinemap.go:150] Operation for "delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]" failed. No retries permitted until 2022-09-07 20:42:56.40788628 +0000 UTC m=+913.409264827 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) since it's in attaching or detaching state
I0907 20:42:55.431129       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:42:55.439670       1 gc_controller.go:161] GC'ing orphaned
I0907 20:42:55.439829       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:42:55.462411       1 controller.go:272] Triggering nodeSync
I0907 20:42:55.462466       1 controller.go:291] nodeSync has been triggered
I0907 20:42:55.462487       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 5 lines ...
I0907 20:43:01.678019       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="53µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45896" resp=200
I0907 20:43:06.366909       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 0 items received
I0907 20:43:07.515978       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:43:10.376982       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:10.401125       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:10.401192       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2425
I0907 20:43:10.401245       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Failed, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:43:10.401278       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
I0907 20:43:10.401296       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: claim azuredisk-5194/pvc-s8k78 not found
I0907 20:43:10.401305       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:43:10.401322       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:43:10.401369       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa] started
I0907 20:43:10.408987       1 pv_controller.go:1340] isVolumeReleased[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is released
I0907 20:43:10.409006       1 pv_controller.go:1404] doDeleteVolume [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]
I0907 20:43:10.409152       1 pv_controller.go:1259] deletion of volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) since it's in attaching or detaching state
I0907 20:43:10.409179       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: set phase Failed
I0907 20:43:10.409189       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase Failed already set
E0907 20:43:10.409276       1 goroutinemap.go:150] Operation for "delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]" failed. No retries permitted until 2022-09-07 20:43:12.4092597 +0000 UTC m=+929.410638247 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) since it's in attaching or detaching state
I0907 20:43:11.678257       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="67.3µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55178" resp=200
I0907 20:43:12.034328       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) returned with <nil>
I0907 20:43:12.034373       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa) succeeded
I0907 20:43:12.034382       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa was detached from node:capz-l75yso-mp-0000000
I0907 20:43:12.034534       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa") on node "capz-l75yso-mp-0000000" 
I0907 20:43:12.610820       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
... skipping 4 lines ...
I0907 20:43:20.000994       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-dz6a8w" (5.4µs)
I0907 20:43:21.677912       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="68.6µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:41310" resp=200
I0907 20:43:25.322700       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Job total 0 items received
I0907 20:43:25.377454       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:25.401620       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:25.401668       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2425
I0907 20:43:25.401704       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Failed, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:43:25.401762       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
I0907 20:43:25.401786       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: claim azuredisk-5194/pvc-s8k78 not found
I0907 20:43:25.401813       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:43:25.401831       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:43:25.401858       1 pv_controller.go:1231] deleteVolumeOperation [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa] started
I0907 20:43:25.404374       1 pv_controller.go:1340] isVolumeReleased[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is released
... skipping 3 lines ...
I0907 20:43:28.238990       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:43:29.373886       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 0 items received
I0907 20:43:30.559170       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa
I0907 20:43:30.559197       1 pv_controller.go:1435] volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" deleted
I0907 20:43:30.559208       1 pv_controller.go:1283] deleteVolumeOperation [pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: success
I0907 20:43:30.566157       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa" with version 2483
I0907 20:43:30.566201       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: phase: Failed, bound to: "azuredisk-5194/pvc-s8k78 (uid: e9e509b7-0b61-4eb5-817c-b6539449bdfa)", boundByController: true
I0907 20:43:30.566228       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: volume is bound to claim azuredisk-5194/pvc-s8k78
I0907 20:43:30.566249       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: claim azuredisk-5194/pvc-s8k78 not found
I0907 20:43:30.566262       1 pv_controller.go:1108] reclaimVolume[pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa]: policy is Delete
I0907 20:43:30.566277       1 pv_controller.go:1752] scheduleOperation[delete-pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa[3d91c6cc-2240-4580-844f-e991c7b67b7d]]
I0907 20:43:30.566322       1 pv_protection_controller.go:205] Got event on PV pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa
I0907 20:43:30.566340       1 pv_protection_controller.go:125] Processing PV pvc-e9e509b7-0b61-4eb5-817c-b6539449bdfa
... skipping 45 lines ...
I0907 20:43:32.434022       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-cxt2s]: no volume found
I0907 20:43:32.434130       1 pv_controller.go:1445] provisionClaim[azuredisk-1353/pvc-cxt2s]: started
I0907 20:43:32.434400       1 pv_controller.go:1752] scheduleOperation[provision-azuredisk-1353/pvc-cxt2s[7bfc9686-0db3-41c6-9169-21d4416a1a22]]
I0907 20:43:32.434564       1 pv_controller.go:1485] provisionClaimOperation [azuredisk-1353/pvc-cxt2s] started, class: "azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-x4lpq"
I0907 20:43:32.434706       1 pv_controller.go:1500] provisionClaimOperation [azuredisk-1353/pvc-cxt2s]: plugin name: kubernetes.io/azure-disk, provisioner name: kubernetes.io/azure-disk
I0907 20:43:32.434138       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" duration="25.290961ms"
I0907 20:43:32.435258       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-nbv4m\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:43:32.433386       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b"
I0907 20:43:32.436046       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" startTime="2022-09-07 20:43:32.436019446 +0000 UTC m=+949.437397993"
I0907 20:43:32.437650       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-nbv4m" timed out (false) [last progress check: 2022-09-07 20:43:32 +0000 UTC - now: 2022-09-07 20:43:32.437641737 +0000 UTC m=+949.439020384]
I0907 20:43:32.438189       1 pvc_protection_controller.go:353] "Got event on PVC" pvc="azuredisk-1353/pvc-cxt2s"
I0907 20:43:32.438352       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-cxt2s" with version 2505
I0907 20:43:32.438540       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-cxt2s]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
... skipping 232 lines ...
I0907 20:43:51.660956       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b-rcfnr in state Running, deletion time 2022-09-07 20:44:21 +0000 UTC
I0907 20:43:51.661014       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b" (98.599µs)
I0907 20:43:51.662494       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" duration="15.289917ms"
I0907 20:43:51.662525       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" startTime="2022-09-07 20:43:51.662509113 +0000 UTC m=+968.663887660"
I0907 20:43:51.663449       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m"
I0907 20:43:51.667123       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" duration="4.598775ms"
I0907 20:43:51.667151       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-nbv4m\": the object has been modified; please apply your changes to the latest version and try again"
I0907 20:43:51.667204       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" startTime="2022-09-07 20:43:51.667186988 +0000 UTC m=+968.668565635"
I0907 20:43:51.674135       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m"
I0907 20:43:51.674286       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" duration="7.085462ms"
I0907 20:43:51.674308       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" startTime="2022-09-07 20:43:51.674296849 +0000 UTC m=+968.675675396"
I0907 20:43:51.674614       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-nbv4m" for a progress check after 597s
I0907 20:43:51.674636       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-nbv4m" duration="328.899µs"
... skipping 2 lines ...
I0907 20:43:51.676469       1 controller_utils.go:938] Ignoring inactive pod azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b-rcfnr in state Running, deletion time 2022-09-07 20:44:21 +0000 UTC
I0907 20:43:51.676512       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b" (86.4µs)
I0907 20:43:51.676982       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9"
I0907 20:43:51.677005       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9, PodDisruptionBudget controller will avoid syncing.
I0907 20:43:51.677014       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9"
I0907 20:43:51.685474       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44064" resp=200
W0907 20:43:51.734400       1 reconciler.go:385] Multi-Attach error for volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22") from node "capz-l75yso-mp-0000000" Volume is already used by pods azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b-rcfnr on node capz-l75yso-mp-0000001
I0907 20:43:51.734631       1 event.go:291] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22\" Volume is already used by pod(s) azuredisk-volume-tester-nbv4m-759cd4878b-rcfnr"
I0907 20:43:54.730749       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:43:55.379423       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:43:55.402629       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:43:55.402683       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2514
I0907 20:43:55.402787       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-1353/pvc-cxt2s" with version 2516
I0907 20:43:55.402824       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-cxt2s]: phase: Bound, bound to: "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22", bindCompleted: true, boundByController: true
... skipping 410 lines ...
I0907 20:45:36.420765       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:45:36.420774       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:45:36.420784       1 pv_controller.go:1763] operation "delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]" is already running, skipping
I0907 20:45:36.420807       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22] started
I0907 20:45:36.422331       1 pv_controller.go:1340] isVolumeReleased[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is released
I0907 20:45:36.422345       1 pv_controller.go:1404] doDeleteVolume [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]
I0907 20:45:36.443732       1 pv_controller.go:1259] deletion of volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:45:36.443756       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: set phase Failed
I0907 20:45:36.443768       1 pv_controller.go:858] updating PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: set phase Failed
I0907 20:45:36.446841       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2782
I0907 20:45:36.446876       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase: Failed, bound to: "azuredisk-1353/pvc-cxt2s (uid: 7bfc9686-0db3-41c6-9169-21d4416a1a22)", boundByController: true
I0907 20:45:36.446916       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is bound to claim azuredisk-1353/pvc-cxt2s
I0907 20:45:36.446933       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: claim azuredisk-1353/pvc-cxt2s not found
I0907 20:45:36.446942       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:45:36.446955       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:45:36.446978       1 pv_controller.go:1763] operation "delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]" is already running, skipping
I0907 20:45:36.446996       1 pv_protection_controller.go:205] Got event on PV pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22
I0907 20:45:36.447572       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2782
I0907 20:45:36.447625       1 pv_controller.go:879] volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" entered phase "Failed"
I0907 20:45:36.447636       1 pv_controller.go:901] volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
E0907 20:45:36.447705       1 goroutinemap.go:150] Operation for "delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]" failed. No retries permitted until 2022-09-07 20:45:36.947681228 +0000 UTC m=+1073.949059875 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:45:36.448236       1 event.go:291] "Event occurred" object="pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted"
I0907 20:45:36.493076       1 actual_state_of_world.go:432] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 on node "capz-l75yso-mp-0000000"
I0907 20:45:40.384733       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:45:40.408135       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:45:40.408231       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2782
I0907 20:45:40.408311       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase: Failed, bound to: "azuredisk-1353/pvc-cxt2s (uid: 7bfc9686-0db3-41c6-9169-21d4416a1a22)", boundByController: true
I0907 20:45:40.408374       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is bound to claim azuredisk-1353/pvc-cxt2s
I0907 20:45:40.408408       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: claim azuredisk-1353/pvc-cxt2s not found
I0907 20:45:40.408418       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:45:40.408457       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:45:40.408507       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22] started
I0907 20:45:40.415631       1 pv_controller.go:1340] isVolumeReleased[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is released
I0907 20:45:40.415650       1 pv_controller.go:1404] doDeleteVolume [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]
I0907 20:45:40.436354       1 pv_controller.go:1259] deletion of volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:45:40.436374       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: set phase Failed
I0907 20:45:40.436384       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase Failed already set
E0907 20:45:40.436433       1 goroutinemap.go:150] Operation for "delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]" failed. No retries permitted until 2022-09-07 20:45:41.436392195 +0000 UTC m=+1078.437770742 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:45:41.678531       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="61.799µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46046" resp=200
I0907 20:45:41.875866       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
I0907 20:45:41.875896       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22 to the node "capz-l75yso-mp-0000000" mounted false
I0907 20:45:41.937058       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-l75yso-mp-0000000" succeeded. VolumesAttached: []
I0907 20:45:41.937190       1 reconciler.go:221] attacherDetacher.DetachVolume started for volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22") on node "capz-l75yso-mp-0000000" 
I0907 20:45:41.937619       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
... skipping 6 lines ...
I0907 20:45:45.616848       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:45:51.678576       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="66.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45128" resp=200
I0907 20:45:52.326016       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.EndpointSlice total 0 items received
I0907 20:45:55.385006       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:45:55.408443       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:45:55.408499       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2782
I0907 20:45:55.408539       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase: Failed, bound to: "azuredisk-1353/pvc-cxt2s (uid: 7bfc9686-0db3-41c6-9169-21d4416a1a22)", boundByController: true
I0907 20:45:55.408575       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is bound to claim azuredisk-1353/pvc-cxt2s
I0907 20:45:55.408594       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: claim azuredisk-1353/pvc-cxt2s not found
I0907 20:45:55.408602       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:45:55.408615       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:45:55.408645       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22] started
I0907 20:45:55.413903       1 pv_controller.go:1340] isVolumeReleased[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is released
I0907 20:45:55.413920       1 pv_controller.go:1404] doDeleteVolume [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]
I0907 20:45:55.414072       1 pv_controller.go:1259] deletion of volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) since it's in attaching or detaching state
I0907 20:45:55.414088       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: set phase Failed
I0907 20:45:55.414098       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase Failed already set
E0907 20:45:55.414204       1 goroutinemap.go:150] Operation for "delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]" failed. No retries permitted until 2022-09-07 20:45:57.414180598 +0000 UTC m=+1094.415559245 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) since it's in attaching or detaching state
I0907 20:45:55.436407       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:45:55.449767       1 gc_controller.go:161] GC'ing orphaned
I0907 20:45:55.449789       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:45:56.301315       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:45:57.314313       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) returned with <nil>
I0907 20:45:57.314451       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22) succeeded
... skipping 6 lines ...
I0907 20:46:02.007272       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-dz6a8w, uid 04caab35-7e6f-4b8e-b4d1-8d9f1ba92c3b, event type delete
I0907 20:46:04.380404       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:46:06.355941       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0907 20:46:10.385774       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:46:10.409145       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:46:10.409198       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2782
I0907 20:46:10.409284       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase: Failed, bound to: "azuredisk-1353/pvc-cxt2s (uid: 7bfc9686-0db3-41c6-9169-21d4416a1a22)", boundByController: true
I0907 20:46:10.409392       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is bound to claim azuredisk-1353/pvc-cxt2s
I0907 20:46:10.409419       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: claim azuredisk-1353/pvc-cxt2s not found
I0907 20:46:10.409428       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:46:10.409444       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:46:10.409496       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22] started
I0907 20:46:10.417996       1 pv_controller.go:1340] isVolumeReleased[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is released
... skipping 8 lines ...
I0907 20:46:15.463283       1 controller.go:804] Finished updateLoadBalancerHosts
I0907 20:46:15.463289       1 controller.go:731] It took 1.77e-05 seconds to finish nodeSyncInternal
I0907 20:46:15.581845       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22
I0907 20:46:15.581890       1 pv_controller.go:1435] volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" deleted
I0907 20:46:15.582007       1 pv_controller.go:1283] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: success
I0907 20:46:15.588253       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22" with version 2843
I0907 20:46:15.588294       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: phase: Failed, bound to: "azuredisk-1353/pvc-cxt2s (uid: 7bfc9686-0db3-41c6-9169-21d4416a1a22)", boundByController: true
I0907 20:46:15.588445       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: volume is bound to claim azuredisk-1353/pvc-cxt2s
I0907 20:46:15.588475       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: claim azuredisk-1353/pvc-cxt2s not found
I0907 20:46:15.588526       1 pv_controller.go:1108] reclaimVolume[pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22]: policy is Delete
I0907 20:46:15.588540       1 pv_controller.go:1752] scheduleOperation[delete-pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22[511255ab-5f8e-4787-b413-485b99a63db7]]
I0907 20:46:15.588602       1 pv_controller.go:1231] deleteVolumeOperation [pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22] started
I0907 20:46:15.588827       1 pv_protection_controller.go:205] Got event on PV pvc-7bfc9686-0db3-41c6-9169-21d4416a1a22
... skipping 81 lines ...
I0907 20:46:19.343891       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-4538/pvc-8bdks] status: phase Bound already set
I0907 20:46:19.343906       1 pv_controller.go:1038] volume "pvc-45f36294-f999-40a3-8497-0f70f52a2876" bound to claim "azuredisk-4538/pvc-8bdks"
I0907 20:46:19.343923       1 pv_controller.go:1039] volume "pvc-45f36294-f999-40a3-8497-0f70f52a2876" status after binding: phase: Bound, bound to: "azuredisk-4538/pvc-8bdks (uid: 45f36294-f999-40a3-8497-0f70f52a2876)", boundByController: true
I0907 20:46:19.343940       1 pv_controller.go:1040] claim "azuredisk-4538/pvc-8bdks" status after binding: phase: Bound, bound to: "pvc-45f36294-f999-40a3-8497-0f70f52a2876", bindCompleted: true, boundByController: true
I0907 20:46:21.170562       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1353
I0907 20:46:21.194245       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1353, name default-token-xbkw2, uid 0d9d61c9-4230-4af3-bee5-bdc960e7e774, event type delete
E0907 20:46:21.205182       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1353/default: secrets "default-token-ppnvd" is forbidden: unable to create new content in namespace azuredisk-1353 because it is being terminated
I0907 20:46:21.242964       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aed285402ced, uid a321ff93-70bd-404d-9b6d-9f5b6ae1d332, event type delete
I0907 20:46:21.245938       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aed289fa090a, uid af5f7a5e-9530-467a-80d1-7cf10d3e703e, event type delete
I0907 20:46:21.251655       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aee26ac1a436, uid 135fe8cc-ed7c-4759-b849-b1174dcda026, event type delete
I0907 20:46:21.254959       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aee308cd19ba, uid 73576433-694c-4c22-97b3-92ed848aed1b, event type delete
I0907 20:46:21.258022       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aee30b2089cc, uid a9a139dc-08dc-40a4-9833-dc8237308918, event type delete
I0907 20:46:21.261154       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-1353, name azuredisk-volume-tester-nbv4m-759cd4878b-pwkl9.1712aee3154694c9, uid 4dcb2c5e-f78f-4b1a-b385-1b22363701b8, event type delete
... skipping 193 lines ...
I0907 20:46:35.319495       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c StorageAccountType:StandardSSD_LRS Size:10
I0907 20:46:35.450876       1 gc_controller.go:161] GC'ing orphaned
I0907 20:46:35.450897       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:46:36.673158       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4538
I0907 20:46:36.697124       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-4538, name pvc-8bdks.1712aef4e6ba1f43, uid 5da71a0a-5a6c-4c98-b21c-78679ea03e16, event type delete
I0907 20:46:36.706196       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4538, name default-token-ggt2g, uid bea48b44-2fc2-4985-8b21-7be9f73c2a9d, event type delete
E0907 20:46:36.725022       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4538/default: secrets "default-token-c8jvc" is forbidden: unable to create new content in namespace azuredisk-4538 because it is being terminated
I0907 20:46:36.769005       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4538/default), service account deleted, removing tokens
I0907 20:46:36.769206       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4538, name default, uid 4f5c6585-4603-4990-943e-82b6ff9ddaeb, event type delete
I0907 20:46:36.769229       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4538" (3.2µs)
I0907 20:46:36.774071       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4538, name kube-root-ca.crt, uid a0eb844f-080b-4851-8802-2f617f2e230d, event type delete
I0907 20:46:36.776695       1 publisher.go:186] Finished syncing namespace "azuredisk-4538" (2.558687ms)
I0907 20:46:36.805240       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4538, estimate: 0, errors: <nil>
... skipping 71 lines ...
I0907 20:46:37.545977       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-k8k79] status: phase Bound already set
I0907 20:46:37.545988       1 pv_controller.go:1038] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" bound to claim "azuredisk-59/pvc-k8k79"
I0907 20:46:37.546003       1 pv_controller.go:1039] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
I0907 20:46:37.546018       1 pv_controller.go:1040] claim "azuredisk-59/pvc-k8k79" status after binding: phase: Bound, bound to: "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e", bindCompleted: true, boundByController: true
I0907 20:46:37.550421       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8266
I0907 20:46:37.561458       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8266, name default-token-tkcwr, uid 7851d430-4c3d-4f7a-b10c-c60d6c2fd49a, event type delete
E0907 20:46:37.573749       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8266/default: secrets "default-token-s5jh7" is forbidden: unable to create new content in namespace azuredisk-8266 because it is being terminated
I0907 20:46:37.603785       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8266/default), service account deleted, removing tokens
I0907 20:46:37.603999       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8266, name default, uid d73b0a5d-9804-46a2-a045-dae31c553f37, event type delete
I0907 20:46:37.604111       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8266" (2.7µs)
I0907 20:46:37.609789       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8266, name kube-root-ca.crt, uid cd359e64-996d-4aba-8c56-b724d7644b29, event type delete
I0907 20:46:37.613001       1 publisher.go:186] Finished syncing namespace "azuredisk-8266" (3.103084ms)
I0907 20:46:37.662078       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb StorageAccountType:StandardSSD_LRS Size:10
... skipping 132 lines ...
I0907 20:46:38.468690       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb" to node "capz-l75yso-mp-0000000".
I0907 20:46:38.469105       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" lun 0 to node "capz-l75yso-mp-0000000".
I0907 20:46:38.475682       1 azure_controller_vmss.go:101] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - attach disk(capz-l75yso-dynamic-pvc-122dfd25-9109-4bd8-9877-7ffed342a63e, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-122dfd25-9109-4bd8-9877-7ffed342a63e) with DiskEncryptionSetID()
I0907 20:46:38.515269       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4376, name kube-root-ca.crt, uid 8329a468-8ac9-473f-9529-f806242c8917, event type delete
I0907 20:46:38.517588       1 publisher.go:186] Finished syncing namespace "azuredisk-4376" (2.274988ms)
I0907 20:46:38.531656       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4376, name default-token-jh5jk, uid 8db8a1c8-dc35-43ea-b417-be0024a0051c, event type delete
E0907 20:46:38.542666       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4376/default: secrets "default-token-g9245" is forbidden: unable to create new content in namespace azuredisk-4376 because it is being terminated
I0907 20:46:38.570383       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4376/default), service account deleted, removing tokens
I0907 20:46:38.570665       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4376, name default, uid 137efdf3-9647-4dfb-a46c-7f1e3d1f8ff0, event type delete
I0907 20:46:38.570962       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (25.9µs)
I0907 20:46:38.604480       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0907 20:46:38.604770       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (2.7µs)
I0907 20:46:38.613998       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (191.535002ms)
... skipping 444 lines ...
I0907 20:47:27.109612       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:47:27.109685       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:47:27.109768       1 pv_controller.go:1763] operation "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" is already running, skipping
I0907 20:47:27.109845       1 pv_protection_controller.go:205] Got event on PV pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c
I0907 20:47:27.112271       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:47:27.112287       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:47:27.142176       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:27.142197       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:47:27.142208       1 pv_controller.go:858] updating PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:47:27.148084       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:47:27.148197       1 pv_controller.go:879] volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" entered phase "Failed"
I0907 20:47:27.148234       1 pv_controller.go:901] volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
E0907 20:47:27.148293       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:47:27.648273589 +0000 UTC m=+1184.649652236 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:27.148122       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:47:27.148536       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:47:27.148741       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:47:27.148897       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:47:27.149036       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:47:27.149158       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:47:27.149315       1 pv_controller.go:1765] operation "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" postponed due to exponential backoff
I0907 20:47:27.148602       1 event.go:291] "Event occurred" object="pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted"
... skipping 34 lines ...
I0907 20:47:35.453385       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:47:35.640272       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000000 ReadyCondition updated. Updating timestamp.
I0907 20:47:36.564078       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.PriorityLevelConfiguration total 0 items received
I0907 20:47:40.390223       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:47:40.413787       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:47:40.413854       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:47:40.413893       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:47:40.413921       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:47:40.413936       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:47:40.413944       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:47:40.413959       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:47:40.413974       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" with version 2968
I0907 20:47:40.413990       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
... skipping 41 lines ...
I0907 20:47:40.414709       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-k8k79] status: phase Bound already set
I0907 20:47:40.414719       1 pv_controller.go:1038] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" bound to claim "azuredisk-59/pvc-k8k79"
I0907 20:47:40.414734       1 pv_controller.go:1039] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
I0907 20:47:40.414747       1 pv_controller.go:1040] claim "azuredisk-59/pvc-k8k79" status after binding: phase: Bound, bound to: "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e", bindCompleted: true, boundByController: true
I0907 20:47:40.424011       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:47:40.424029       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:47:40.512088       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:40.512118       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:47:40.512127       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase Failed already set
E0907 20:47:40.512288       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:47:41.512134587 +0000 UTC m=+1198.513513234 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:41.678625       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="68.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54570" resp=200
I0907 20:47:44.375422       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.NetworkPolicy total 7 items received
I0907 20:47:50.650005       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0907 20:47:51.647870       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ServiceAccount total 27 items received
I0907 20:47:51.677767       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="58.499µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:39436" resp=200
I0907 20:47:52.503723       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-122dfd25-9109-4bd8-9877-7ffed342a63e) returned with <nil>
... skipping 19 lines ...
I0907 20:47:55.415089       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: volume is bound to claim azuredisk-59/pvc-48vwm
I0907 20:47:55.415107       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: claim azuredisk-59/pvc-48vwm found: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:47:55.415121       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: all is bound
I0907 20:47:55.415130       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:47:55.415139       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:47:55.415148       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:47:55.415166       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:47:55.415186       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:47:55.415206       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:47:55.415217       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:47:55.415232       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:47:55.415261       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c] started
I0907 20:47:55.414921       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-k8k79" with version 2970
... skipping 28 lines ...
I0907 20:47:55.418724       1 pv_controller.go:1038] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" bound to claim "azuredisk-59/pvc-48vwm"
I0907 20:47:55.418845       1 pv_controller.go:1039] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-48vwm (uid: f74c935c-c29c-403a-9c83-67f487682adb)", boundByController: true
I0907 20:47:55.418977       1 pv_controller.go:1040] claim "azuredisk-59/pvc-48vwm" status after binding: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:47:55.420834       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:47:55.420851       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:47:55.439217       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:47:55.444717       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:55.444736       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:47:55.444745       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase Failed already set
E0907 20:47:55.444771       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:47:57.444754089 +0000 UTC m=+1214.446132636 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:47:55.453821       1 gc_controller.go:161] GC'ing orphaned
I0907 20:47:55.453839       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:47:55.463978       1 controller.go:272] Triggering nodeSync
I0907 20:47:55.463995       1 controller.go:291] nodeSync has been triggered
I0907 20:47:55.464002       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0907 20:47:55.464076       1 controller.go:804] Finished updateLoadBalancerHosts
... skipping 18 lines ...
I0907 20:48:10.416095       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: volume is bound to claim azuredisk-59/pvc-48vwm
I0907 20:48:10.416108       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: claim azuredisk-59/pvc-48vwm found: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:10.416120       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: all is bound
I0907 20:48:10.416127       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:48:10.416135       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:48:10.416146       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:48:10.416163       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:48:10.416182       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:48:10.416202       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:48:10.416211       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:48:10.416225       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:48:10.416248       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c] started
I0907 20:48:10.416429       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-59/pvc-k8k79" with version 2970
... skipping 27 lines ...
I0907 20:48:10.419328       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-48vwm] status: phase Bound already set
I0907 20:48:10.419452       1 pv_controller.go:1038] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" bound to claim "azuredisk-59/pvc-48vwm"
I0907 20:48:10.419541       1 pv_controller.go:1039] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-48vwm (uid: f74c935c-c29c-403a-9c83-67f487682adb)", boundByController: true
I0907 20:48:10.419646       1 pv_controller.go:1040] claim "azuredisk-59/pvc-48vwm" status after binding: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:10.425019       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:48:10.425034       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:48:10.448905       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:10.449066       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:48:10.449082       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase Failed already set
E0907 20:48:10.449111       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:48:14.449091284 +0000 UTC m=+1231.450469931 (durationBeforeRetry 4s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:11.677807       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="51.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:52600" resp=200
I0907 20:48:15.455886       1 gc_controller.go:161] GC'ing orphaned
I0907 20:48:15.455911       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:48:21.678106       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="53.899µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:59980" resp=200
I0907 20:48:22.967471       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.MutatingWebhookConfiguration total 7 items received
I0907 20:48:25.393073       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 3 lines ...
I0907 20:48:25.416796       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: volume is bound to claim azuredisk-59/pvc-48vwm
I0907 20:48:25.416817       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: claim azuredisk-59/pvc-48vwm found: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:25.416861       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: all is bound
I0907 20:48:25.416874       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:48:25.416885       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:48:25.416904       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:48:25.416939       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:48:25.416960       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:48:25.416979       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:48:25.417007       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:48:25.417027       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:48:25.417045       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" with version 2968
I0907 20:48:25.417065       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
... skipping 35 lines ...
I0907 20:48:25.420060       1 pv_controller.go:1038] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" bound to claim "azuredisk-59/pvc-k8k79"
I0907 20:48:25.420141       1 pv_controller.go:1039] volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
I0907 20:48:25.420270       1 pv_controller.go:1040] claim "azuredisk-59/pvc-k8k79" status after binding: phase: Bound, bound to: "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e", bindCompleted: true, boundByController: true
I0907 20:48:25.422855       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:48:25.422873       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:48:25.440247       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:48:25.445509       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:25.445528       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:48:25.445538       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase Failed already set
E0907 20:48:25.445564       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:48:33.445546042 +0000 UTC m=+1250.446924689 (durationBeforeRetry 8s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:26.364656       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:48:31.678482       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="80.6µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53788" resp=200
I0907 20:48:35.456311       1 gc_controller.go:161] GC'ing orphaned
I0907 20:48:35.456336       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:48:36.873710       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.FlowSchema total 0 items received
I0907 20:48:38.834829       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-q99gik" (16.9µs)
... skipping 13 lines ...
I0907 20:48:40.417635       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: volume is bound to claim azuredisk-59/pvc-48vwm
I0907 20:48:40.417701       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: claim azuredisk-59/pvc-48vwm found: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:40.417750       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: all is bound
I0907 20:48:40.417812       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:48:40.417841       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:48:40.417921       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:48:40.417990       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:48:40.418018       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:48:40.418036       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:48:40.418044       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:48:40.418059       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:48:40.418086       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c] started
I0907 20:48:40.417259       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-k8k79]: volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" found: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
... skipping 25 lines ...
I0907 20:48:40.418651       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-59/pvc-48vwm] status: phase Bound already set
I0907 20:48:40.418664       1 pv_controller.go:1038] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" bound to claim "azuredisk-59/pvc-48vwm"
I0907 20:48:40.418683       1 pv_controller.go:1039] volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" status after binding: phase: Bound, bound to: "azuredisk-59/pvc-48vwm (uid: f74c935c-c29c-403a-9c83-67f487682adb)", boundByController: true
I0907 20:48:40.418699       1 pv_controller.go:1040] claim "azuredisk-59/pvc-48vwm" status after binding: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:40.424115       1 pv_controller.go:1340] isVolumeReleased[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is released
I0907 20:48:40.424132       1 pv_controller.go:1404] doDeleteVolume [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]
I0907 20:48:40.446607       1 pv_controller.go:1259] deletion of volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:40.446627       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: set phase Failed
I0907 20:48:40.446636       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase Failed already set
E0907 20:48:40.446680       1 goroutinemap.go:150] Operation for "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" failed. No retries permitted until 2022-09-07 20:48:56.44664383 +0000 UTC m=+1273.448022477 (durationBeforeRetry 16s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:48:41.678468       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="51.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:48674" resp=200
I0907 20:48:43.022306       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb) returned with <nil>
I0907 20:48:43.022493       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb) succeeded
I0907 20:48:43.022512       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb was detached from node:capz-l75yso-mp-0000000
I0907 20:48:43.022642       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-f74c935c-c29c-403a-9c83-67f487682adb" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f74c935c-c29c-403a-9c83-67f487682adb") on node "capz-l75yso-mp-0000000" 
I0907 20:48:43.022585       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-l75yso-mp-0000000, refreshing the cache
... skipping 8 lines ...
I0907 20:48:55.417409       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: volume is bound to claim azuredisk-59/pvc-48vwm
I0907 20:48:55.417426       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: claim azuredisk-59/pvc-48vwm found: phase: Bound, bound to: "pvc-f74c935c-c29c-403a-9c83-67f487682adb", bindCompleted: true, boundByController: true
I0907 20:48:55.417445       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: all is bound
I0907 20:48:55.417455       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:48:55.417464       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:48:55.417477       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:48:55.417500       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:48:55.417518       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:48:55.417536       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:48:55.417544       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:48:55.417557       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:48:55.417565       1 pv_controller.go:1765] operation "delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]" postponed due to exponential backoff
I0907 20:48:55.417579       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" with version 2968
... skipping 61 lines ...
I0907 20:49:10.417747       1 pv_controller.go:858] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: set phase Bound
I0907 20:49:10.417756       1 pv_controller.go:861] updating PersistentVolume[pvc-f74c935c-c29c-403a-9c83-67f487682adb]: phase Bound already set
I0907 20:49:10.417761       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-k8k79]: volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" found: phase: Bound, bound to: "azuredisk-59/pvc-k8k79 (uid: 122dfd25-9109-4bd8-9877-7ffed342a63e)", boundByController: true
I0907 20:49:10.417767       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3095
I0907 20:49:10.417770       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-59/pvc-k8k79]: claim is already correctly bound
I0907 20:49:10.417780       1 pv_controller.go:1012] binding volume "pvc-122dfd25-9109-4bd8-9877-7ffed342a63e" to claim "azuredisk-59/pvc-k8k79"
I0907 20:49:10.417786       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:49:10.417789       1 pv_controller.go:910] updating PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: binding to "azuredisk-59/pvc-k8k79"
I0907 20:49:10.417804       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:49:10.417804       1 pv_controller.go:922] updating PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: already bound to "azuredisk-59/pvc-k8k79"
I0907 20:49:10.417811       1 pv_controller.go:858] updating PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: set phase Bound
I0907 20:49:10.417820       1 pv_controller.go:861] updating PersistentVolume[pvc-122dfd25-9109-4bd8-9877-7ffed342a63e]: phase Bound already set
I0907 20:49:10.417822       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
... skipping 37 lines ...
I0907 20:49:15.457365       1 gc_controller.go:161] GC'ing orphaned
I0907 20:49:15.457391       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:49:15.615327       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c
I0907 20:49:15.615359       1 pv_controller.go:1435] volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" deleted
I0907 20:49:15.615372       1 pv_controller.go:1283] deleteVolumeOperation [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: success
I0907 20:49:15.620433       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c" with version 3255
I0907 20:49:15.620469       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: phase: Failed, bound to: "azuredisk-59/pvc-n764p (uid: ae5b9383-eb59-41e4-bbe8-7c719f41db1c)", boundByController: true
I0907 20:49:15.620687       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: volume is bound to claim azuredisk-59/pvc-n764p
I0907 20:49:15.620711       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: claim azuredisk-59/pvc-n764p not found
I0907 20:49:15.620719       1 pv_controller.go:1108] reclaimVolume[pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c]: policy is Delete
I0907 20:49:15.620734       1 pv_controller.go:1752] scheduleOperation[delete-pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c[59e58281-d8c5-4e98-ae7f-955c7f07923e]]
I0907 20:49:15.620761       1 pv_controller.go:1231] deleteVolumeOperation [pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c] started
I0907 20:49:15.620934       1 pv_protection_controller.go:205] Got event on PV pvc-ae5b9383-eb59-41e4-bbe8-7c719f41db1c
... skipping 559 lines ...
I0907 20:50:09.686486       1 pv_controller.go:1108] reclaimVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: policy is Delete
I0907 20:50:09.686496       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]]
I0907 20:50:09.686505       1 pv_controller.go:1763] operation "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" is already running, skipping
I0907 20:50:09.686337       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c18050d4-e577-47c9-955e-4f8784c48835] started
I0907 20:50:09.688109       1 pv_controller.go:1340] isVolumeReleased[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is released
I0907 20:50:09.688245       1 pv_controller.go:1404] doDeleteVolume [pvc-c18050d4-e577-47c9-955e-4f8784c48835]
I0907 20:50:09.710511       1 pv_controller.go:1259] deletion of volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:09.710535       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: set phase Failed
I0907 20:50:09.710543       1 pv_controller.go:858] updating PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: set phase Failed
I0907 20:50:09.713974       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
I0907 20:50:09.714223       1 pv_controller.go:879] volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" entered phase "Failed"
I0907 20:50:09.714340       1 pv_controller.go:901] volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:09.714592       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
E0907 20:50:09.714649       1 goroutinemap.go:150] Operation for "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" failed. No retries permitted until 2022-09-07 20:50:10.214369162 +0000 UTC m=+1347.215747709 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:09.714906       1 event.go:291] "Event occurred" object="pvc-c18050d4-e577-47c9-955e-4f8784c48835" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted"
I0907 20:50:09.714929       1 pv_protection_controller.go:205] Got event on PV pvc-c18050d4-e577-47c9-955e-4f8784c48835
I0907 20:50:09.715049       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:50:09.715153       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is bound to claim azuredisk-2546/pvc-n7dh7
I0907 20:50:09.715243       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: claim azuredisk-2546/pvc-n7dh7 not found
I0907 20:50:09.715485       1 pv_controller.go:1108] reclaimVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: policy is Delete
I0907 20:50:09.715510       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]]
I0907 20:50:09.715520       1 pv_controller.go:1765] operation "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" postponed due to exponential backoff
I0907 20:50:10.344094       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 63 items received
I0907 20:50:10.396859       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:50:10.419368       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:50:10.419503       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
I0907 20:50:10.419540       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:50:10.419572       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is bound to claim azuredisk-2546/pvc-n7dh7
I0907 20:50:10.419594       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: claim azuredisk-2546/pvc-n7dh7 not found
I0907 20:50:10.419602       1 pv_controller.go:1108] reclaimVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: policy is Delete
I0907 20:50:10.419616       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]]
I0907 20:50:10.419632       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" with version 3332
I0907 20:50:10.419652       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: phase: Bound, bound to: "azuredisk-2546/pvc-pq4ml (uid: caf29a28-d3a9-4ce6-a8d7-10a0e911546d)", boundByController: true
... skipping 18 lines ...
I0907 20:50:10.420343       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml] status: phase Bound already set
I0907 20:50:10.420354       1 pv_controller.go:1038] volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" bound to claim "azuredisk-2546/pvc-pq4ml"
I0907 20:50:10.420373       1 pv_controller.go:1039] volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-pq4ml (uid: caf29a28-d3a9-4ce6-a8d7-10a0e911546d)", boundByController: true
I0907 20:50:10.420440       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-pq4ml" status after binding: phase: Bound, bound to: "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d", bindCompleted: true, boundByController: true
I0907 20:50:10.422497       1 pv_controller.go:1340] isVolumeReleased[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is released
I0907 20:50:10.422528       1 pv_controller.go:1404] doDeleteVolume [pvc-c18050d4-e577-47c9-955e-4f8784c48835]
I0907 20:50:10.443984       1 pv_controller.go:1259] deletion of volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:10.444004       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: set phase Failed
I0907 20:50:10.444013       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase Failed already set
E0907 20:50:10.444040       1 goroutinemap.go:150] Operation for "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" failed. No retries permitted until 2022-09-07 20:50:11.444021932 +0000 UTC m=+1348.445400579 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:11.678550       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="64.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:38906" resp=200
I0907 20:50:14.329501       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 1 items received
I0907 20:50:14.557189       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000001"
I0907 20:50:14.557428       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835 to the node "capz-l75yso-mp-0000001" mounted false
I0907 20:50:14.557562       1 actual_state_of_world.go:398] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d to the node "capz-l75yso-mp-0000001" mounted false
I0907 20:50:14.636270       1 node_status_updater.go:136] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"1\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835\"}]}}" for node "capz-l75yso-mp-0000001" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835 1}]
... skipping 17 lines ...
I0907 20:50:15.458839       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:50:15.660135       1 node_lifecycle_controller.go:1047] Node capz-l75yso-mp-0000001 ReadyCondition updated. Updating timestamp.
I0907 20:50:21.678503       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="74.299µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:44202" resp=200
I0907 20:50:25.396967       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:50:25.420459       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:50:25.420600       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
I0907 20:50:25.420642       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:50:25.420603       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-pq4ml" with version 3335
I0907 20:50:25.420774       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: phase: Bound, bound to: "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d", bindCompleted: true, boundByController: true
I0907 20:50:25.420873       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" found: phase: Bound, bound to: "azuredisk-2546/pvc-pq4ml (uid: caf29a28-d3a9-4ce6-a8d7-10a0e911546d)", boundByController: true
I0907 20:50:25.420940       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: claim is already correctly bound
I0907 20:50:25.420955       1 pv_controller.go:1012] binding volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" to claim "azuredisk-2546/pvc-pq4ml"
I0907 20:50:25.420967       1 pv_controller.go:910] updating PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: binding to "azuredisk-2546/pvc-pq4ml"
... skipping 19 lines ...
I0907 20:50:25.421757       1 pv_controller.go:1038] volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" bound to claim "azuredisk-2546/pvc-pq4ml"
I0907 20:50:25.422114       1 pv_controller.go:1039] volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" status after binding: phase: Bound, bound to: "azuredisk-2546/pvc-pq4ml (uid: caf29a28-d3a9-4ce6-a8d7-10a0e911546d)", boundByController: true
I0907 20:50:25.422140       1 pv_controller.go:1040] claim "azuredisk-2546/pvc-pq4ml" status after binding: phase: Bound, bound to: "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d", bindCompleted: true, boundByController: true
I0907 20:50:25.428625       1 pv_controller.go:1340] isVolumeReleased[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is released
I0907 20:50:25.428643       1 pv_controller.go:1404] doDeleteVolume [pvc-c18050d4-e577-47c9-955e-4f8784c48835]
I0907 20:50:25.442964       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:50:25.451726       1 pv_controller.go:1259] deletion of volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:25.451745       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: set phase Failed
I0907 20:50:25.451754       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase Failed already set
E0907 20:50:25.451781       1 goroutinemap.go:150] Operation for "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" failed. No retries permitted until 2022-09-07 20:50:27.451762293 +0000 UTC m=+1364.453140840 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_1), could not be deleted
I0907 20:50:26.411066       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:50:26.912995       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PriorityClass total 3 items received
I0907 20:50:27.372694       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Role total 3 items received
I0907 20:50:30.025074       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d) returned with <nil>
I0907 20:50:30.025313       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d) succeeded
I0907 20:50:30.025396       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d was detached from node:capz-l75yso-mp-0000001
... skipping 31 lines ...
I0907 20:50:40.422297       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: volume is bound to claim azuredisk-2546/pvc-pq4ml
I0907 20:50:40.422389       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: claim azuredisk-2546/pvc-pq4ml found: phase: Bound, bound to: "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d", bindCompleted: true, boundByController: true
I0907 20:50:40.422438       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: all is bound
I0907 20:50:40.422449       1 pv_controller.go:858] updating PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: set phase Bound
I0907 20:50:40.422477       1 pv_controller.go:861] updating PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: phase Bound already set
I0907 20:50:40.422510       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
I0907 20:50:40.422532       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:50:40.422565       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is bound to claim azuredisk-2546/pvc-n7dh7
I0907 20:50:40.422583       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: claim azuredisk-2546/pvc-n7dh7 not found
I0907 20:50:40.422590       1 pv_controller.go:1108] reclaimVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: policy is Delete
I0907 20:50:40.422605       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]]
I0907 20:50:40.422667       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c18050d4-e577-47c9-955e-4f8784c48835] started
I0907 20:50:40.427859       1 pv_controller.go:1340] isVolumeReleased[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is released
I0907 20:50:40.427877       1 pv_controller.go:1404] doDeleteVolume [pvc-c18050d4-e577-47c9-955e-4f8784c48835]
I0907 20:50:40.427929       1 pv_controller.go:1259] deletion of volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) since it's in attaching or detaching state
I0907 20:50:40.427941       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: set phase Failed
I0907 20:50:40.427950       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase Failed already set
E0907 20:50:40.427994       1 goroutinemap.go:150] Operation for "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" failed. No retries permitted until 2022-09-07 20:50:44.427957602 +0000 UTC m=+1381.429336249 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) since it's in attaching or detaching state
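The retry delays recorded above double on each failed delete of pvc-c18050d4-e577-47c9-955e-4f8784c48835 (500ms at 20:50:09, 1s at 20:50:10, 2s at 20:50:25, 4s at 20:50:40) because the operation is postponed with exponential backoff. A minimal Go sketch of that retry shape, assuming a hypothetical deleteDisk callback rather than the controller's actual goroutinemap code:

package main

import (
	"errors"
	"fmt"
	"time"
)

// errDiskAttached stands in for the "disk already attached to node" failure
// seen in the log above; it is a placeholder, not the cloud provider's error.
var errDiskAttached = errors.New("disk already attached to node, could not be deleted")

// deleteWithBackoff retries deleteDisk, doubling the wait after each failure
// (500ms, 1s, 2s, 4s, ...), mirroring the durationBeforeRetry values above.
func deleteWithBackoff(deleteDisk func() error, maxRetries int) error {
	delay := 500 * time.Millisecond
	for attempt := 1; attempt <= maxRetries; attempt++ {
		err := deleteDisk()
		if err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts", maxRetries)
}

func main() {
	// Simulate a disk that only becomes deletable on the 4th attempt,
	// roughly like the PV above once its detach finally completes.
	calls := 0
	err := deleteWithBackoff(func() error {
		calls++
		if calls < 4 {
			return errDiskAttached
		}
		return nil
	}, 6)
	fmt.Println("result:", err)
}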
I0907 20:50:41.678073       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="69.199µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:45112" resp=200
I0907 20:50:45.401955       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000001) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) returned with <nil>
I0907 20:50:45.402010       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835) succeeded
I0907 20:50:45.402020       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835 was detached from node:capz-l75yso-mp-0000001
I0907 20:50:45.402227       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835") on node "capz-l75yso-mp-0000001" 
I0907 20:50:51.679216       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="65.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34232" resp=200
I0907 20:50:53.881066       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-btcmtu" (12.3µs)
I0907 20:50:55.398492       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:50:55.422036       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:50:55.422161       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3420
I0907 20:50:55.422198       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:50:55.422250       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-2546/pvc-pq4ml" with version 3335
I0907 20:50:55.422326       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: phase: Bound, bound to: "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d", bindCompleted: true, boundByController: true
I0907 20:50:55.422405       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" found: phase: Bound, bound to: "azuredisk-2546/pvc-pq4ml (uid: caf29a28-d3a9-4ce6-a8d7-10a0e911546d)", boundByController: true
I0907 20:50:55.422422       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-2546/pvc-pq4ml]: claim is already correctly bound
I0907 20:50:55.422459       1 pv_controller.go:1012] binding volume "pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d" to claim "azuredisk-2546/pvc-pq4ml"
I0907 20:50:55.422469       1 pv_controller.go:910] updating PersistentVolume[pvc-caf29a28-d3a9-4ce6-a8d7-10a0e911546d]: binding to "azuredisk-2546/pvc-pq4ml"
... skipping 27 lines ...
I0907 20:50:56.424296       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:50:59.156655       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0907 20:51:00.656732       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-c18050d4-e577-47c9-955e-4f8784c48835
I0907 20:51:00.656761       1 pv_controller.go:1435] volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" deleted
I0907 20:51:00.656793       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c18050d4-e577-47c9-955e-4f8784c48835]: success
I0907 20:51:00.663806       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c18050d4-e577-47c9-955e-4f8784c48835" with version 3497
I0907 20:51:00.663889       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: phase: Failed, bound to: "azuredisk-2546/pvc-n7dh7 (uid: c18050d4-e577-47c9-955e-4f8784c48835)", boundByController: true
I0907 20:51:00.663916       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: volume is bound to claim azuredisk-2546/pvc-n7dh7
I0907 20:51:00.663935       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: claim azuredisk-2546/pvc-n7dh7 not found
I0907 20:51:00.663958       1 pv_controller.go:1108] reclaimVolume[pvc-c18050d4-e577-47c9-955e-4f8784c48835]: policy is Delete
I0907 20:51:00.663972       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]]
I0907 20:51:00.663979       1 pv_controller.go:1763] operation "delete-pvc-c18050d4-e577-47c9-955e-4f8784c48835[128d189b-024f-4a7b-9bc8-28b263d7484f]" is already running, skipping
I0907 20:51:00.664062       1 pv_protection_controller.go:205] Got event on PV pvc-c18050d4-e577-47c9-955e-4f8784c48835
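Deletion of pvc-c18050d4-e577-47c9-955e-4f8784c48835 only succeeds at 20:51:00, after the attach/detach controller has detached the disk from capz-l75yso-mp-0000001 at 20:50:45; while the disk is attached or in an attaching/detaching state, every doDeleteVolume attempt above fails. A small Go sketch of that ordering, using a hypothetical diskClient interface (not the Azure cloud provider's real API):

package main

import (
	"context"
	"fmt"
	"time"
)

// diskClient is a hypothetical abstraction, just enough to show the ordering.
type diskClient interface {
	IsAttached(ctx context.Context, diskURI string) (bool, error)
	Delete(ctx context.Context, diskURI string) error
}

// deleteWhenDetached polls until the disk is no longer attached, then deletes it,
// instead of issuing the delete while the disk is still bound to a node.
func deleteWhenDetached(ctx context.Context, c diskClient, diskURI string, poll time.Duration) error {
	for {
		attached, err := c.IsAttached(ctx, diskURI)
		if err != nil {
			return err
		}
		if !attached {
			return c.Delete(ctx, diskURI)
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(poll):
		}
	}
}

// fakeClient reports the disk as attached for a fixed number of polls.
type fakeClient struct{ attachedPolls int }

func (f *fakeClient) IsAttached(context.Context, string) (bool, error) {
	if f.attachedPolls > 0 {
		f.attachedPolls--
		return true, nil
	}
	return false, nil
}

func (f *fakeClient) Delete(_ context.Context, uri string) error {
	fmt.Println("deleted", uri)
	return nil
}

func main() {
	c := &fakeClient{attachedPolls: 3} // detaches after three polls
	err := deleteWhenDetached(context.Background(), c, "pvc-example", 10*time.Millisecond)
	fmt.Println("result:", err)
}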
... skipping 233 lines ...
I0907 20:51:21.090705       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-ndw5c] status: set phase Bound
I0907 20:51:21.090888       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-ndw5c] status: phase Bound already set
I0907 20:51:21.090982       1 pv_controller.go:1038] volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" bound to claim "azuredisk-8582/pvc-ndw5c"
I0907 20:51:21.091128       1 pv_controller.go:1039] volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-ndw5c (uid: de250ca5-0b39-45fb-b3f3-32c21dc4fb37)", boundByController: true
I0907 20:51:21.091249       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-ndw5c" status after binding: phase: Bound, bound to: "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37", bindCompleted: true, boundByController: true
I0907 20:51:21.094402       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-2546, name default-token-sdwld, uid b3882df0-83cf-4c71-9bdb-76e01b4ea452, event type delete
E0907 20:51:21.108654       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-2546/default: secrets "default-token-6fdvn" is forbidden: unable to create new content in namespace azuredisk-2546 because it is being terminated
I0907 20:51:21.111165       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af2470cced86, uid d613dc96-5eb5-43df-8582-397f96945062, event type delete
I0907 20:51:21.113358       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af26db3fbfa2, uid 3a9d2a0a-8dbf-4749-bce9-3c15cbfcca8c, event type delete
I0907 20:51:21.116125       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af294240f0a4, uid ed522fca-fe1c-4b83-9f67-47a8a993b8d7, event type delete
I0907 20:51:21.122491       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af297bf69b32, uid 91723385-8e07-4b1a-85a0-959954d605f6, event type delete
I0907 20:51:21.125464       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af297bf6dc6e, uid 1b308e5c-8db9-4a79-b1c0-187031b5c1d2, event type delete
I0907 20:51:21.128067       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-2546, name azuredisk-volume-tester-2ldx4.1712af29a4a66158, uid 3558a11b-8594-44a9-8e60-9c8af5505073, event type delete
... skipping 552 lines ...
I0907 20:52:02.671778       1 pv_controller.go:1404] doDeleteVolume [pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]
I0907 20:52:02.696547       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349 from node "capz-l75yso-mp-0000000"
I0907 20:52:02.696802       1 azure_controller_vmss.go:145] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349"
I0907 20:52:02.696822       1 azure_controller_vmss.go:175] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349)
I0907 20:52:02.696581       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37 from node "capz-l75yso-mp-0000000"
I0907 20:52:02.696631       1 azure_controller_common.go:224] detach /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b from node "capz-l75yso-mp-0000000"
I0907 20:52:02.701286       1 pv_controller.go:1259] deletion of volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:52:02.701306       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: set phase Failed
I0907 20:52:02.701315       1 pv_controller.go:858] updating PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: set phase Failed
I0907 20:52:02.705088       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" with version 3700
I0907 20:52:02.705288       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: phase: Failed, bound to: "azuredisk-8582/pvc-px5qh (uid: 3b785d4f-c370-44f3-bfbe-6f2be38c5349)", boundByController: true
I0907 20:52:02.705415       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: volume is bound to claim azuredisk-8582/pvc-px5qh
I0907 20:52:02.705145       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" with version 3700
I0907 20:52:02.705627       1 pv_controller.go:879] volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" entered phase "Failed"
I0907 20:52:02.705651       1 pv_controller.go:901] volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
E0907 20:52:02.705732       1 goroutinemap.go:150] Operation for "delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]" failed. No retries permitted until 2022-09-07 20:52:03.205671901 +0000 UTC m=+1460.207050548 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted
I0907 20:52:02.705162       1 pv_protection_controller.go:205] Got event on PV pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349
I0907 20:52:02.705544       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: claim azuredisk-8582/pvc-px5qh not found
I0907 20:52:02.706023       1 pv_controller.go:1108] reclaimVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: policy is Delete
I0907 20:52:02.706287       1 pv_controller.go:1752] scheduleOperation[delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]]
I0907 20:52:02.706385       1 pv_controller.go:1765] operation "delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]" postponed due to exponential backoff
I0907 20:52:02.706139       1 event.go:291] "Event occurred" object="pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/virtualMachineScaleSets/capz-l75yso-mp-0/virtualMachines/capz-l75yso-mp-0_0), could not be deleted"
... skipping 37 lines ...
I0907 20:52:10.424855       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is bound to claim azuredisk-8582/pvc-ndw5c
I0907 20:52:10.424872       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: claim azuredisk-8582/pvc-ndw5c found: phase: Bound, bound to: "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37", bindCompleted: true, boundByController: true
I0907 20:52:10.424904       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: all is bound
I0907 20:52:10.424913       1 pv_controller.go:858] updating PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: set phase Bound
I0907 20:52:10.424922       1 pv_controller.go:861] updating PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: phase Bound already set
I0907 20:52:10.424933       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" with version 3700
I0907 20:52:10.424951       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: phase: Failed, bound to: "azuredisk-8582/pvc-px5qh (uid: 3b785d4f-c370-44f3-bfbe-6f2be38c5349)", boundByController: true
I0907 20:52:10.425005       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: volume is bound to claim azuredisk-8582/pvc-px5qh
I0907 20:52:10.425024       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: claim azuredisk-8582/pvc-px5qh not found
I0907 20:52:10.425031       1 pv_controller.go:1108] reclaimVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: policy is Delete
I0907 20:52:10.425063       1 pv_controller.go:1752] scheduleOperation[delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]]
I0907 20:52:10.425083       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3598
I0907 20:52:10.425101       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Bound, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
... skipping 2 lines ...
I0907 20:52:10.425144       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: all is bound
I0907 20:52:10.425150       1 pv_controller.go:858] updating PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: set phase Bound
I0907 20:52:10.425160       1 pv_controller.go:861] updating PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase Bound already set
I0907 20:52:10.425204       1 pv_controller.go:1231] deleteVolumeOperation [pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349] started
I0907 20:52:10.430193       1 pv_controller.go:1340] isVolumeReleased[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: volume is released
I0907 20:52:10.430212       1 pv_controller.go:1404] doDeleteVolume [pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]
I0907 20:52:10.430244       1 pv_controller.go:1259] deletion of volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) since it's in attaching or detaching state
I0907 20:52:10.430259       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: set phase Failed
I0907 20:52:10.430270       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: phase Failed already set
E0907 20:52:10.430308       1 goroutinemap.go:150] Operation for "delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]" failed. No retries permitted until 2022-09-07 20:52:11.430280901 +0000 UTC m=+1468.431659548 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349) since it's in attaching or detaching state
I0907 20:52:11.677901       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="57.8µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46610" resp=200
I0907 20:52:12.353291       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 4 items received
I0907 20:52:15.462814       1 gc_controller.go:161] GC'ing orphaned
I0907 20:52:15.462838       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:52:17.277571       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 4 items received
I0907 20:52:21.677857       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.899µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54554" resp=200
... skipping 12 lines ...
I0907 20:52:25.425325       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is bound to claim azuredisk-8582/pvc-ndw5c
I0907 20:52:25.425372       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: claim azuredisk-8582/pvc-ndw5c found: phase: Bound, bound to: "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37", bindCompleted: true, boundByController: true
I0907 20:52:25.425388       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: all is bound
I0907 20:52:25.425397       1 pv_controller.go:858] updating PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: set phase Bound
I0907 20:52:25.425407       1 pv_controller.go:861] updating PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: phase Bound already set
I0907 20:52:25.425446       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" with version 3700
I0907 20:52:25.425467       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: phase: Failed, bound to: "azuredisk-8582/pvc-px5qh (uid: 3b785d4f-c370-44f3-bfbe-6f2be38c5349)", boundByController: true
I0907 20:52:25.425500       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: volume is bound to claim azuredisk-8582/pvc-px5qh
I0907 20:52:25.425543       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: claim azuredisk-8582/pvc-px5qh not found
I0907 20:52:25.425130       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-8582/pvc-f62bz" with version 3600
I0907 20:52:25.425581       1 pv_controller.go:1108] reclaimVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: policy is Delete
I0907 20:52:25.425634       1 pv_controller.go:1752] scheduleOperation[delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]]
I0907 20:52:25.425639       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-8582/pvc-f62bz]: phase: Bound, bound to: "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b", bindCompleted: true, boundByController: true
... skipping 42 lines ...
I0907 20:52:26.498507       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
I0907 20:52:28.761969       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRoleBinding total 4 items received
I0907 20:52:30.660358       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349
I0907 20:52:30.660385       1 pv_controller.go:1435] volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" deleted
I0907 20:52:30.660398       1 pv_controller.go:1283] deleteVolumeOperation [pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: success
I0907 20:52:30.667710       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349" with version 3741
I0907 20:52:30.667749       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: phase: Failed, bound to: "azuredisk-8582/pvc-px5qh (uid: 3b785d4f-c370-44f3-bfbe-6f2be38c5349)", boundByController: true
I0907 20:52:30.667774       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: volume is bound to claim azuredisk-8582/pvc-px5qh
I0907 20:52:30.667793       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: claim azuredisk-8582/pvc-px5qh not found
I0907 20:52:30.667907       1 pv_controller.go:1108] reclaimVolume[pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349]: policy is Delete
I0907 20:52:30.668044       1 pv_controller.go:1752] scheduleOperation[delete-pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349[71df780a-cb8e-46b8-9d0d-7c00c2faa0ca]]
I0907 20:52:30.668147       1 pv_controller.go:1231] deleteVolumeOperation [pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349] started
I0907 20:52:30.667849       1 pv_protection_controller.go:205] Got event on PV pvc-3b785d4f-c370-44f3-bfbe-6f2be38c5349
... skipping 48 lines ...
I0907 20:52:33.239797       1 pv_controller.go:1108] reclaimVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: policy is Delete
I0907 20:52:33.239823       1 pv_controller.go:1752] scheduleOperation[delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]]
I0907 20:52:33.239869       1 pv_controller.go:1763] operation "delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]" is already running, skipping
I0907 20:52:33.239982       1 pv_controller.go:1231] deleteVolumeOperation [pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37] started
I0907 20:52:33.241525       1 pv_controller.go:1340] isVolumeReleased[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is released
I0907 20:52:33.241543       1 pv_controller.go:1404] doDeleteVolume [pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]
I0907 20:52:33.241574       1 pv_controller.go:1259] deletion of volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37) since it's in attaching or detaching state
I0907 20:52:33.241588       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: set phase Failed
I0907 20:52:33.241596       1 pv_controller.go:858] updating PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: set phase Failed
I0907 20:52:33.244414       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" with version 3750
I0907 20:52:33.244455       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: phase: Failed, bound to: "azuredisk-8582/pvc-ndw5c (uid: de250ca5-0b39-45fb-b3f3-32c21dc4fb37)", boundByController: true
I0907 20:52:33.244477       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is bound to claim azuredisk-8582/pvc-ndw5c
I0907 20:52:33.244496       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: claim azuredisk-8582/pvc-ndw5c not found
I0907 20:52:33.244506       1 pv_controller.go:1108] reclaimVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: policy is Delete
I0907 20:52:33.244518       1 pv_controller.go:1752] scheduleOperation[delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]]
I0907 20:52:33.244530       1 pv_controller.go:1763] operation "delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]" is already running, skipping
I0907 20:52:33.244546       1 pv_protection_controller.go:205] Got event on PV pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37
I0907 20:52:33.244678       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" with version 3750
I0907 20:52:33.244700       1 pv_controller.go:879] volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" entered phase "Failed"
I0907 20:52:33.244710       1 pv_controller.go:901] volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37) since it's in attaching or detaching state
E0907 20:52:33.244746       1 goroutinemap.go:150] Operation for "delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]" failed. No retries permitted until 2022-09-07 20:52:33.744726928 +0000 UTC m=+1490.746105575 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37) since it's in attaching or detaching state
I0907 20:52:33.244923       1 event.go:291] "Event occurred" object="pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37) since it's in attaching or detaching state"
I0907 20:52:33.377676       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 9 items received
I0907 20:52:34.786201       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-btcmtu" (12.8µs)
I0907 20:52:35.123402       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-btcmtu" (5.3µs)
I0907 20:52:35.463482       1 gc_controller.go:161] GC'ing orphaned
I0907 20:52:35.463507       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:52:38.217178       1 azure_controller_vmss.go:187] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37) returned with <nil>
... skipping 20 lines ...
I0907 20:52:40.425435       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8582/pvc-f62bz]: already bound to "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b"
I0907 20:52:40.425444       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8582/pvc-f62bz] status: set phase Bound
I0907 20:52:40.425465       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8582/pvc-f62bz] status: phase Bound already set
I0907 20:52:40.425477       1 pv_controller.go:1038] volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" bound to claim "azuredisk-8582/pvc-f62bz"
I0907 20:52:40.425494       1 pv_controller.go:1039] volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" status after binding: phase: Bound, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
I0907 20:52:40.425511       1 pv_controller.go:1040] claim "azuredisk-8582/pvc-f62bz" status after binding: phase: Bound, bound to: "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b", bindCompleted: true, boundByController: true
I0907 20:52:40.425536       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: phase: Failed, bound to: "azuredisk-8582/pvc-ndw5c (uid: de250ca5-0b39-45fb-b3f3-32c21dc4fb37)", boundByController: true
I0907 20:52:40.425556       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is bound to claim azuredisk-8582/pvc-ndw5c
I0907 20:52:40.425576       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: claim azuredisk-8582/pvc-ndw5c not found
I0907 20:52:40.425584       1 pv_controller.go:1108] reclaimVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: policy is Delete
I0907 20:52:40.425598       1 pv_controller.go:1752] scheduleOperation[delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]]
I0907 20:52:40.425629       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3598
I0907 20:52:40.425648       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Bound, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
... skipping 8 lines ...
I0907 20:52:41.678509       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="71.7µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:57176" resp=200
I0907 20:52:43.361266       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Lease total 633 items received
I0907 20:52:45.591699       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37
I0907 20:52:45.591729       1 pv_controller.go:1435] volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" deleted
I0907 20:52:45.591740       1 pv_controller.go:1283] deleteVolumeOperation [pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: success
I0907 20:52:45.598517       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37" with version 3772
I0907 20:52:45.598764       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: phase: Failed, bound to: "azuredisk-8582/pvc-ndw5c (uid: de250ca5-0b39-45fb-b3f3-32c21dc4fb37)", boundByController: true
I0907 20:52:45.599048       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: volume is bound to claim azuredisk-8582/pvc-ndw5c
I0907 20:52:45.599152       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: claim azuredisk-8582/pvc-ndw5c not found
I0907 20:52:45.599220       1 pv_controller.go:1108] reclaimVolume[pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37]: policy is Delete
I0907 20:52:45.599307       1 pv_controller.go:1752] scheduleOperation[delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]]
I0907 20:52:45.599359       1 pv_controller.go:1763] operation "delete-pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37[0f7590e2-350a-4bd9-b0fb-46daf3effbb7]" is already running, skipping
I0907 20:52:45.599454       1 pv_protection_controller.go:205] Got event on PV pvc-de250ca5-0b39-45fb-b3f3-32c21dc4fb37
... skipping 47 lines ...
I0907 20:52:48.822492       1 pv_controller.go:1108] reclaimVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: policy is Delete
I0907 20:52:48.822502       1 pv_controller.go:1752] scheduleOperation[delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]]
I0907 20:52:48.822514       1 pv_controller.go:1763] operation "delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]" is already running, skipping
I0907 20:52:48.822554       1 pv_controller.go:1231] deleteVolumeOperation [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b] started
I0907 20:52:48.824060       1 pv_controller.go:1340] isVolumeReleased[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is released
I0907 20:52:48.824076       1 pv_controller.go:1404] doDeleteVolume [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]
I0907 20:52:48.824148       1 pv_controller.go:1259] deletion of volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state
I0907 20:52:48.824161       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: set phase Failed
I0907 20:52:48.824174       1 pv_controller.go:858] updating PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: set phase Failed
I0907 20:52:48.826752       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3782
I0907 20:52:48.826834       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Failed, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
I0907 20:52:48.826858       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is bound to claim azuredisk-8582/pvc-f62bz
I0907 20:52:48.826900       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: claim azuredisk-8582/pvc-f62bz not found
I0907 20:52:48.826911       1 pv_controller.go:1108] reclaimVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: policy is Delete
I0907 20:52:48.826938       1 pv_controller.go:1752] scheduleOperation[delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]]
I0907 20:52:48.826951       1 pv_controller.go:1763] operation "delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]" is already running, skipping
I0907 20:52:48.826969       1 pv_protection_controller.go:205] Got event on PV pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b
I0907 20:52:48.827094       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3782
I0907 20:52:48.827120       1 pv_controller.go:879] volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" entered phase "Failed"
I0907 20:52:48.827144       1 pv_controller.go:901] volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" changed status to "Failed": failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state
E0907 20:52:48.827197       1 goroutinemap.go:150] Operation for "delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]" failed. No retries permitted until 2022-09-07 20:52:49.327163014 +0000 UTC m=+1506.328541561 (durationBeforeRetry 500ms). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state
I0907 20:52:48.827462       1 event.go:291] "Event occurred" object="pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state"
I0907 20:52:51.677851       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="59.2µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:46670" resp=200
I0907 20:52:52.370010       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 8 items received
I0907 20:52:55.341757       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:52:55.341757       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:52:55.410088       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:52:55.425433       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:52:55.425499       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3782
I0907 20:52:55.425554       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Failed, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
I0907 20:52:55.425591       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is bound to claim azuredisk-8582/pvc-f62bz
I0907 20:52:55.425615       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: claim azuredisk-8582/pvc-f62bz not found
I0907 20:52:55.425625       1 pv_controller.go:1108] reclaimVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: policy is Delete
I0907 20:52:55.425641       1 pv_controller.go:1752] scheduleOperation[delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]]
I0907 20:52:55.425684       1 pv_controller.go:1231] deleteVolumeOperation [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b] started
I0907 20:52:55.432013       1 pv_controller.go:1340] isVolumeReleased[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is released
I0907 20:52:55.432031       1 pv_controller.go:1404] doDeleteVolume [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]
I0907 20:52:55.432062       1 pv_controller.go:1259] deletion of volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state
I0907 20:52:55.432074       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: set phase Failed
I0907 20:52:55.432084       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase Failed already set
E0907 20:52:55.432111       1 goroutinemap.go:150] Operation for "delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]" failed. No retries permitted until 2022-09-07 20:52:56.43209267 +0000 UTC m=+1513.433471317 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b) since it's in attaching or detaching state
I0907 20:52:55.446293       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:52:55.464548       1 gc_controller.go:161] GC'ing orphaned
I0907 20:52:55.464572       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:52:55.466703       1 controller.go:272] Triggering nodeSync
I0907 20:52:55.466729       1 controller.go:291] nodeSync has been triggered
I0907 20:52:55.466737       1 controller.go:788] Running updateLoadBalancerHosts(len(services)==0, workers==1)
... skipping 8 lines ...
I0907 20:52:58.520775       1 operation_generator.go:486] DetachVolume.Detach succeeded for volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b") on node "capz-l75yso-mp-0000000" 
I0907 20:53:01.681019       1 httplog.go:109] "HTTP" verb="GET" URI="/healthz" latency="75.799µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:36072" resp=200
I0907 20:53:07.811124       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 10 items received
I0907 20:53:10.410589       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0907 20:53:10.425800       1 pv_controller_base.go:528] resyncing PV controller
I0907 20:53:10.425868       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3782
I0907 20:53:10.425981       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Failed, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
I0907 20:53:10.426076       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is bound to claim azuredisk-8582/pvc-f62bz
I0907 20:53:10.426167       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: claim azuredisk-8582/pvc-f62bz not found
I0907 20:53:10.426203       1 pv_controller.go:1108] reclaimVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: policy is Delete
I0907 20:53:10.426221       1 pv_controller.go:1752] scheduleOperation[delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]]
I0907 20:53:10.426325       1 pv_controller.go:1231] deleteVolumeOperation [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b] started
I0907 20:53:10.433314       1 pv_controller.go:1340] isVolumeReleased[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is released
... skipping 2 lines ...
I0907 20:53:15.465583       1 gc_controller.go:161] GC'ing orphaned
I0907 20:53:15.465608       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0907 20:53:15.616712       1 azure_managedDiskController.go:253] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b
I0907 20:53:15.616743       1 pv_controller.go:1435] volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" deleted
I0907 20:53:15.616754       1 pv_controller.go:1283] deleteVolumeOperation [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: success
I0907 20:53:15.624006       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b" with version 3823
I0907 20:53:15.624049       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: phase: Failed, bound to: "azuredisk-8582/pvc-f62bz (uid: 18e6c66a-2a03-4244-af83-4629ec4dc92b)", boundByController: true
I0907 20:53:15.624077       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: volume is bound to claim azuredisk-8582/pvc-f62bz
I0907 20:53:15.624096       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: claim azuredisk-8582/pvc-f62bz not found
I0907 20:53:15.624108       1 pv_controller.go:1108] reclaimVolume[pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b]: policy is Delete
I0907 20:53:15.624123       1 pv_controller.go:1752] scheduleOperation[delete-pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b[634c2b63-8a44-48f6-a1bb-abd508ba1ed6]]
I0907 20:53:15.624146       1 pv_controller.go:1231] deleteVolumeOperation [pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b] started
I0907 20:53:15.624314       1 pv_protection_controller.go:205] Got event on PV pvc-18e6c66a-2a03-4244-af83-4629ec4dc92b
... skipping 51 lines ...
I0907 20:53:23.931156       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-7051/pvc-j6mqr" with version 3872
I0907 20:53:23.933992       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-l75yso-dynamic-pvc-bde5b013-e07c-4b7d-ad50-dcadf6245550 StorageAccountType:Standard_LRS Size:10
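Here the controller provisions a new Standard_LRS managed disk for the test PVC. A hedged sketch of the kind of StorageClass object that would drive this, built with the k8s.io/api/storage/v1 types; the object name and exact parameter set are assumptions for illustration, not taken from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	reclaim := corev1.PersistentVolumeReclaimDelete // matches the "policy is Delete" lines above

	sc := &storagev1.StorageClass{
		ObjectMeta: metav1.ObjectMeta{Name: "azuredisk-standard-lrs"}, // assumed name
		// In-tree Azure Disk provisioner, as seen in the kubernetes.io/azure-disk volume names above.
		Provisioner: "kubernetes.io/azure-disk",
		Parameters: map[string]string{
			"storageaccounttype": "Standard_LRS", // matches StorageAccountType:Standard_LRS above
			"kind":               "Managed",
		},
		ReclaimPolicy: &reclaim,
	}
	fmt.Println(sc.Name, sc.Provisioner, sc.Parameters)
}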
I0907 20:53:24.443602       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8582
I0907 20:53:24.503068       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8582, name kube-root-ca.crt, uid 7a861e4d-80a4-4b99-8516-892ebf3b218e, event type delete
I0907 20:53:24.508722       1 publisher.go:186] Finished syncing namespace "azuredisk-8582" (5.481359ms)
I0907 20:53:24.520189       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8582, name default-token-qg7z9, uid b09117e6-e64a-44a1-b562-f55133635edc, event type delete
E0907 20:53:24.535876       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8582/default: secrets "default-token-tjp7l" is forbidden: unable to create new content in namespace azuredisk-8582 because it is being terminated
I0907 20:53:24.550986       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af3b52c7fa78, uid 77e392b9-e006-4f04-bb9c-0032de95fe5f, event type delete
I0907 20:53:24.553947       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af3e06cae72d, uid 907e9c70-d3f5-4050-a9e3-401f42b3385c, event type delete
I0907 20:53:24.557079       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af40bc0feeba, uid 311886c7-016a-47f4-924e-258c2214d5e1, event type delete
I0907 20:53:24.559789       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af4327050720, uid 8d8a4ad0-2215-4d71-b382-cebc307889fe, event type delete
I0907 20:53:24.564261       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af43c0b73a36, uid d2e7abb9-6024-4609-958d-4f9563d6775b, event type delete
I0907 20:53:24.567371       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8582, name azuredisk-volume-tester-g7msh.1712af43c3f3f382, uid db2131ba-f939-4bb0-aabc-c4b7f2af9fce, event type delete
... skipping 87 lines ...
I0907 20:53:26.262599       1 pv_controller.go:1038] volume "pvc-bde5b013-e07c-4b7d-ad50-dcadf6245550" bound to claim "azuredisk-7051/pvc-j6mqr"
I0907 20:53:26.262666       1 pv_controller.go:1039] volume "pvc-bde5b013-e07c-4b7d-ad50-dcadf6245550" status after binding: phase: Bound, bound to: "azuredisk-7051/pvc-j6mqr (uid: bde5b013-e07c-4b7d-ad50-dcadf6245550)", boundByController: true
I0907 20:53:26.262683       1 pv_controller.go:1040] claim "azuredisk-7051/pvc-j6mqr" status after binding: phase: Bound, bound to: "pvc-bde5b013-e07c-4b7d-ad50-dcadf6245550", bindCompleted: true, boundByController: true
I0907 20:53:26.283136       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3086, name default-token-r4wrp, uid 7290104b-98cf-4901-930d-0ea3fd8ef62c, event type delete
I0907 20:53:26.292852       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3086, name default, uid 284073bb-0d9a-4be6-b32c-5cfa6233c538, event type delete
I0907 20:53:26.293207       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.4µs)
E0907 20:53:26.295953       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3086/default: secrets "default-token-mxwk8" is forbidden: unable to create new content in namespace azuredisk-3086 because it is being terminated
I0907 20:53:26.296330       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3086/default), service account deleted, removing tokens
I0907 20:53:26.348317       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3086, name kube-root-ca.crt, uid c537b6e0-c8e8-4d79-9057-145cc3213ccc, event type delete
I0907 20:53:26.352393       1 publisher.go:186] Finished syncing namespace "azuredisk-3086" (3.686273ms)
I0907 20:53:26.376876       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3086" (2.3µs)
I0907 20:53:26.377242       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3086, estimate: 0, errors: <nil>
I0907 20:53:26.384721       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3086" (141.766444ms)
... skipping 484 lines ...
I0907 20:55:04.034171       1 reconciler.go:304] attacherDetacher.AttachVolume started for volume "pvc-f520ce69-6e47-4e08-8575-e02ad68739c3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3") from node "capz-l75yso-mp-0000000" 
I0907 20:55:04.034407       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3" to node "capz-l75yso-mp-0000000".
I0907 20:55:04.071357       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3" lun 0 to node "capz-l75yso-mp-0000000".
I0907 20:55:04.071399       1 azure_controller_vmss.go:101] azureDisk - update(capz-l75yso): vm(capz-l75yso-mp-0000000) - attach disk(capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3) with DiskEncryptionSetID()
I0907 20:55:05.024529       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-7051
I0907 20:55:05.055617       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-7051, name default-token-qvrfz, uid a8b6606e-83f7-4ea7-92af-1b70939f5770, event type delete
E0907 20:55:05.066389       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-7051/default: secrets "default-token-bc2p7" is forbidden: unable to create new content in namespace azuredisk-7051 because it is being terminated
I0907 20:55:05.072010       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-7051, name kube-root-ca.crt, uid 7ca3ee39-f682-48f3-97f7-17eefdfcdf96, event type delete
I0907 20:55:05.075629       1 publisher.go:186] Finished syncing namespace "azuredisk-7051" (3.485875ms)
I0907 20:55:05.088956       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-q45p7.1712af58765cb253, uid b32c8b6d-1dd8-4b17-a289-e693962a9ef3, event type delete
I0907 20:55:05.092134       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-q45p7.1712af5c08c62590, uid a24a75b8-0757-4474-8052-799bb0abfbac, event type delete
I0907 20:55:05.095030       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-q45p7.1712af5cd30c8ce6, uid 9b83dd1f-9608-48cd-bb3b-e385887c57da, event type delete
I0907 20:55:05.098145       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-7051, name azuredisk-volume-tester-q45p7.1712af5cd61dc786, uid fa0e2e30-2dff-49dd-8c9e-883f1ddaeca0, event type delete
... skipping 258 lines ...
I0907 20:55:51.922101       1 stateful_set_control.go:376] StatefulSet azuredisk-9183/azuredisk-volume-tester-wwt8n has 1 unhealthy Pods starting with azuredisk-volume-tester-wwt8n-0
I0907 20:55:51.922237       1 stateful_set_control.go:451] StatefulSet azuredisk-9183/azuredisk-volume-tester-wwt8n is waiting for Pod azuredisk-volume-tester-wwt8n-0 to be Running and Ready
I0907 20:55:51.922251       1 stateful_set_control.go:112] StatefulSet azuredisk-9183/azuredisk-volume-tester-wwt8n pod status replicas=1 ready=0 current=1 updated=1
I0907 20:55:51.922258       1 stateful_set_control.go:120] StatefulSet azuredisk-9183/azuredisk-volume-tester-wwt8n revisions current=azuredisk-volume-tester-wwt8n-7dc4c9c847 update=azuredisk-volume-tester-wwt8n-7dc4c9c847
I0907 20:55:51.922268       1 stateful_set.go:477] Successfully synced StatefulSet azuredisk-9183/azuredisk-volume-tester-wwt8n successful
I0907 20:55:51.922276       1 stateful_set.go:431] Finished syncing statefulset "azuredisk-9183/azuredisk-volume-tester-wwt8n" (1.872388ms)
W0907 20:55:51.958497       1 reconciler.go:344] Multi-Attach error for volume "pvc-f520ce69-6e47-4e08-8575-e02ad68739c3" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-l75yso/providers/Microsoft.Compute/disks/capz-l75yso-dynamic-pvc-f520ce69-6e47-4e08-8575-e02ad68739c3") from node "capz-l75yso-mp-0000001" Volume is already exclusively attached to node capz-l75yso-mp-0000000 and can't be attached to another
I0907 20:55:51.958679       1 event.go:291] "Event occurred" object="azuredisk-9183/azuredisk-volume-tester-wwt8n-0" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-f520ce69-6e47-4e08-8575-e02ad68739c3\" Volume is already exclusively attached to one node and can't be attached to another"
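The Multi-Attach warning above is expected for an Azure managed disk: the claim is ReadWriteOnce, so the attach/detach controller must fully detach the disk from capz-l75yso-mp-0000000 before it can attach it to capz-l75yso-mp-0000001 for the rescheduled StatefulSet pod. A minimal sketch of such a claim using the 1.22-era k8s.io/api/core/v1 types; the claim name, namespace, size, and StorageClass name are illustrative, not taken from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sc := "azuredisk-standard-lrs" // assumed StorageClass name

	pvc := &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "pvc-example", Namespace: "azuredisk-9183"},
		Spec: corev1.PersistentVolumeClaimSpec{
			// ReadWriteOnce: the volume can be attached to a single node at a time,
			// which is why moving the pod produces the Multi-Attach warning above.
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &sc,
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: resource.MustParse("10Gi"),
				},
			},
		},
	}
	fmt.Println(pvc.Name, pvc.Spec.AccessModes)
}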
I0907 20:55:52.000223       1 secrets.go:73] Expired bootstrap token in kube-system/bootstrap-token-l1mcal Secret: 2022-09-07T20:55:52Z
I0907 20:55:52.000724       1 tokencleaner.go:194] Deleting expired secret kube-system/bootstrap-token-l1mcal
I0907 20:55:52.004434       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-l1mcal" (4.257672ms)
I0907 20:55:52.004983       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace kube-system, name bootstrap-token-l1mcal, uid e95e0e5f-6ac4-40e7-9ab9-c66a92f471bb, event type delete
I0907 20:55:52.260921       1 azure_vmss.go:186] Couldn't find VMSS VM with nodeName capz-l75yso-mp-0000000, refreshing the cache
I0907 20:55:52.492140       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-l75yso-mp-0000000"
... skipping 250 lines ...
I0907 20:56:30.694060       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1166
I0907 20:56:30.737990       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1166, name kube-root-ca.crt, uid 4b0d64e5-f11e-4711-85e3-b11037ffb2a6, event type delete
I0907 20:56:30.740589       1 publisher.go:186] Finished syncing namespace "azuredisk-1166" (2.331785ms)
I0907 20:56:30.775449       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1166, name default-token-r5bfl, uid 49e4dc24-c52a-4489-85d2-f3fddb1f2f06, event type delete
I0907 20:56:30.784798       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1166, name default, uid c8321d31-173c-445d-81a2-7a3d7bca4570, event type delete
I0907 20:56:30.785017       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1166" (2.9µs)
E0907 20:56:30.792867       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1166/default: secrets "default-token-gz99x" is forbidden: unable to create new content in namespace azuredisk-1166 because it is being terminated
I0907 20:56:30.792934       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1166/default), service account deleted, removing tokens
I0907 20:56:30.837214       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1166" (3µs)
I0907 20:56:30.837715       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-1166, estimate: 0, errors: <nil>
I0907 20:56:30.853577       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-1166" (163.536746ms)
I0907 20:56:31.615952       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4415
I0907 20:56:31.673395       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4415, name kube-root-ca.crt, uid 6cb14da9-8007-48f7-8b85-a16482b1e685, event type delete
... skipping 15 lines ...
I0907 20:56:32.318097       1 publisher.go:186] Finished syncing namespace "azuredisk-6720" (1.52109ms)
I0907 20:56:32.329744       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-6720" (2.1µs)
I0907 20:56:32.331413       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-6720, estimate: 0, errors: <nil>
I0907 20:56:32.337694       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-6720" (218.975189ms)
I0907 20:56:32.631067       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4162
I0907 20:56:32.667289       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4162, name default-token-ghfw5, uid 2c72ce98-394a-4454-b3be-9fb472797c0c, event type delete
E0907 20:56:32.690240       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4162/default: secrets "default-token-cjhkp" is forbidden: unable to create new content in namespace azuredisk-4162 because it is being terminated
2022/09/07 20:56:33 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1410.158 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 37 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-st5kt, container manager
STEP: Dumping workload cluster default/capz-l75yso logs
Sep  7 20:58:15.606: INFO: Collecting logs for Linux node capz-l75yso-control-plane-9pv9f in cluster capz-l75yso in namespace default

Sep  7 20:59:15.608: INFO: Collecting boot logs for AzureMachine capz-l75yso-control-plane-9pv9f

Failed to get logs for machine capz-l75yso-control-plane-pgb27, cluster default/capz-l75yso: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  7 20:59:16.481: INFO: Collecting logs for Linux node capz-l75yso-mp-0000000 in cluster capz-l75yso in namespace default

Sep  7 21:00:16.483: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-l75yso-mp-0

Sep  7 21:00:17.021: INFO: Collecting logs for Linux node capz-l75yso-mp-0000001 in cluster capz-l75yso in namespace default

Sep  7 21:01:17.023: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-l75yso-mp-0

Failed to get logs for machine pool capz-l75yso-mp-0, cluster default/capz-l75yso: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-l75yso kube-system pod logs
STEP: Fetching kube-system pod logs took 718.402834ms
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-l75yso-control-plane-9pv9f, container kube-apiserver
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-969cf87c4-z5zrg, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/etcd-capz-l75yso-control-plane-9pv9f
STEP: Creating log watcher for controller kube-system/calico-node-j8hh7, container calico-node
... skipping 41 lines ...