Result: FAILURE
Tests: 1 failed / 11 succeeded
Started: 2021-09-11 17:53
Elapsed: 1h9m
Revision: main

Test Failures


AzureDisk CSI Driver End-to-End Tests Dynamic Provisioning [single-az] should create a pod with multiple volumes [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 15m34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=AzureDisk\sCSI\sDriver\sEnd\-to\-End\sTests\sDynamic\sProvisioning\s\[single\-az\]\sshould\screate\sa\spod\swith\smultiple\svolumes\s\[kubernetes\.io\/azure\-disk\]\s\[disk\.csi\.azure\.com\]\s\[Windows\]$'
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:694
Unexpected error:
    <*errors.StatusError | 0xc00061f0e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"azuredisk-volume-tester-hr88k\" not found",
            Reason: "NotFound",
            Details: {
                Name: "azuredisk-volume-tester-hr88k",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "azuredisk-volume-tester-hr88k" not found
occurred
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:730
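For context, a minimal client-go sketch (not the driver's actual test code; the kubeconfig path, namespace, and pod name are taken from this log purely for illustration) of how fetching an already-deleted pod surfaces exactly this 404 *errors.StatusError, and how it is usually detected:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the kubeconfig the job wrote locally (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", "./kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Getting a pod that has already been deleted returns a *errors.StatusError
	// with Reason=NotFound and Code=404, matching the struct dumped above.
	_, err = client.CoreV1().Pods("azuredisk-953").Get(context.TODO(), "azuredisk-volume-tester-hr88k", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("pod disappeared before reaching a terminal phase:", err)
	}
}

The repeated "not found" poll lines later in this log suggest the tester pod was removed out from under the wait loop, so the lookup at testsuites.go:730 saw NotFound instead of a terminal phase.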
				


11 Passed Tests

41 Skipped Tests

Error lines from build-log.txt

... skipping 694 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! kubectl get secrets | grep capz-4tyuov-kubeconfig; do sleep 1; done"
capz-4tyuov-kubeconfig                 cluster.x-k8s.io/secret               1      0s
# Get kubeconfig and store it locally.
kubectl get secrets capz-4tyuov-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! kubectl --kubeconfig=./kubeconfig get nodes | grep master; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-4tyuov-control-plane-gzjnv   NotReady   control-plane,master   4s    v1.22.2-rc.0.32+b68064208b29e5
run "kubectl --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s) and 2 worker machine(s) to become Ready
node/capz-4tyuov-control-plane-gzjnv condition met
node/capz-4tyuov-md-0-pxbpw condition met
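As a rough illustration of the node-readiness wait above (the job itself uses the kubectl/timeout loop and CAPZ tooling shown), here is a hedged client-go sketch that polls until all expected nodes report Ready; the helper name, intervals, and node count are assumptions, not the project's code:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodesReady polls until at least minNodes nodes exist and every node
// reports the Ready condition (this job waits for 1 control plane + 2 workers).
func waitForNodesReady(client kubernetes.Interface, minNodes int, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors while the cluster comes up
		}
		if len(nodes.Items) < minNodes {
			return false, nil
		}
		for _, n := range nodes.Items {
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	})
}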
... skipping 33 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep 11 18:05:40.777: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-rqjmf" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep 11 18:05:40.812: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 35.249932ms
Sep 11 18:05:42.847: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070389364s
Sep 11 18:05:44.881: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104264625s
Sep 11 18:05:46.916: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139005909s
Sep 11 18:05:48.950: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173072436s
Sep 11 18:05:50.983: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206653821s
... skipping 418 lines ...
Sep 11 18:20:03.558: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 14m22.781376026s
Sep 11 18:20:05.593: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 14m24.816063663s
Sep 11 18:20:07.626: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 14m26.849458768s
Sep 11 18:20:09.660: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Pending", Reason="", readiness=false. Elapsed: 14m28.883636786s
Sep 11 18:20:11.696: INFO: Pod "azuredisk-volume-tester-rqjmf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14m30.919480701s
STEP: Saw pod success
Sep 11 18:20:11.696: INFO: Pod "azuredisk-volume-tester-rqjmf" satisfied condition "Succeeded or Failed"
Sep 11 18:20:11.696: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-rqjmf"
Sep 11 18:20:11.751: INFO: Pod azuredisk-volume-tester-rqjmf has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-rqjmf in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:20:11.858: INFO: deleting PVC "azuredisk-8081"/"pvc-lpcxl"
Sep 11 18:20:11.858: INFO: Deleting PersistentVolumeClaim "pvc-lpcxl"
STEP: waiting for claim's PV "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" to be deleted
Sep 11 18:20:11.894: INFO: Waiting up to 10m0s for PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 to get deleted
Sep 11 18:20:11.926: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Released (32.075428ms)
Sep 11 18:20:16.960: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (5.065800815s)
Sep 11 18:20:21.995: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (10.100850187s)
Sep 11 18:20:27.030: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (15.13562277s)
Sep 11 18:20:32.065: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (20.170686945s)
Sep 11 18:20:37.100: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (25.206007312s)
Sep 11 18:20:42.133: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 found and phase=Failed (30.238358519s)
Sep 11 18:20:47.166: INFO: PersistentVolume pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 was removed
Sep 11 18:20:47.166: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8081 to be removed
Sep 11 18:20:47.198: INFO: Claim "azuredisk-8081" in namespace "pvc-lpcxl" doesn't exist in the system
Sep 11 18:20:47.198: INFO: deleting StorageClass azuredisk-8081-kubernetes.io-azure-disk-dynamic-sc-wm4f9
Sep 11 18:20:47.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-8081" for this suite.
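The "Waiting up to 15m0s for pod ... to be 'Succeeded or Failed'" lines above come from a poll loop in the e2e testsuites. A minimal sketch of such a loop, assuming a pre-built clientset (helper name and intervals are illustrative, not the repo's actual implementation):

package sketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls the pod's phase every 2s until it is Succeeded or
// Failed, roughly matching the Elapsed/Phase lines printed above.
func waitForPodCompletion(client kubernetes.Interface, ns, name string, timeout time.Duration) (corev1.PodPhase, error) {
	var phase corev1.PodPhase
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			// The failing run in this job hit this path: the pod vanished and the
			// loop kept logging "not found" until the suite gave up.
			fmt.Printf("Pod %q in namespace %q not found. Error: %v\n", name, ns, err)
			return false, nil
		}
		if err != nil {
			return false, err
		}
		phase = pod.Status.Phase
		fmt.Printf("Pod %q: Phase=%q\n", name, phase)
		return phase == corev1.PodSucceeded || phase == corev1.PodFailed, nil
	})
	return phase, err
}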
... skipping 77 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep 11 18:21:12.341: INFO: deleting Pod "azuredisk-3274"/"azuredisk-volume-tester-kcfv6"
Sep 11 18:21:12.375: INFO: Error getting logs for pod azuredisk-volume-tester-kcfv6: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-kcfv6)
STEP: Deleting pod azuredisk-volume-tester-kcfv6 in namespace azuredisk-3274
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:21:12.484: INFO: deleting PVC "azuredisk-3274"/"pvc-267l6"
Sep 11 18:21:12.484: INFO: Deleting PersistentVolumeClaim "pvc-267l6"
STEP: waiting for claim's PV "pvc-916be236-f188-40bc-8105-a85ea39238d6" to be deleted
... skipping 16 lines ...
Sep 11 18:22:28.061: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m15.543702146s)
Sep 11 18:22:33.094: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m20.576394526s)
Sep 11 18:22:38.129: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m25.611637061s)
Sep 11 18:22:43.165: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m30.647411426s)
Sep 11 18:22:48.198: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m35.680229668s)
Sep 11 18:22:53.231: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Bound (1m40.713434626s)
Sep 11 18:22:58.264: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (1m45.746369046s)
Sep 11 18:23:03.296: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (1m50.778927511s)
Sep 11 18:23:08.331: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (1m55.813192323s)
Sep 11 18:23:13.367: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (2m0.849180845s)
Sep 11 18:23:18.401: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (2m5.884093485s)
Sep 11 18:23:23.436: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (2m10.918897265s)
Sep 11 18:23:28.473: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 found and phase=Failed (2m15.955973296s)
Sep 11 18:23:33.506: INFO: PersistentVolume pvc-916be236-f188-40bc-8105-a85ea39238d6 was removed
Sep 11 18:23:33.506: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-3274 to be removed
Sep 11 18:23:33.538: INFO: Claim "azuredisk-3274" in namespace "pvc-267l6" doesn't exist in the system
Sep 11 18:23:33.538: INFO: deleting StorageClass azuredisk-3274-kubernetes.io-azure-disk-dynamic-sc-jsmzp
Sep 11 18:23:33.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-3274" for this suite.
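The "Waiting up to 10m0s for PersistentVolume ... to get deleted" sequences above poll the PV until the API returns NotFound; a PV typically passes through Released and Failed while the backing Azure disk is detached and deleted. A minimal sketch of that wait, assuming a pre-built clientset (helper name and intervals are illustrative):

package sketch

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPVDeleted polls the PersistentVolume every 5s until the API returns
// NotFound, mirroring the "found and phase=..." / "was removed" lines above.
func waitForPVDeleted(client kubernetes.Interface, pvName string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), pvName, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Printf("PersistentVolume %s was removed\n", pvName)
			return true, nil
		}
		if err != nil {
			return false, err
		}
		fmt.Printf("PersistentVolume %s found and phase=%s\n", pvName, pv.Status.Phase)
		return false, nil
	})
}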
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep 11 18:23:34.595: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-qz8ng" in namespace "azuredisk-495" to be "Succeeded or Failed"
Sep 11 18:23:34.634: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 39.360022ms
Sep 11 18:23:36.668: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072954875s
Sep 11 18:23:38.704: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108534641s
Sep 11 18:23:40.736: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141194611s
Sep 11 18:23:42.769: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 8.173548578s
Sep 11 18:23:44.801: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206195221s
... skipping 2 lines ...
Sep 11 18:23:50.900: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 16.305191926s
Sep 11 18:23:52.934: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 18.338901097s
Sep 11 18:23:54.966: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 20.371422204s
Sep 11 18:23:57.001: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Pending", Reason="", readiness=false. Elapsed: 22.406126646s
Sep 11 18:23:59.035: INFO: Pod "azuredisk-volume-tester-qz8ng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.439728306s
STEP: Saw pod success
Sep 11 18:23:59.035: INFO: Pod "azuredisk-volume-tester-qz8ng" satisfied condition "Succeeded or Failed"
Sep 11 18:23:59.035: INFO: deleting Pod "azuredisk-495"/"azuredisk-volume-tester-qz8ng"
Sep 11 18:23:59.088: INFO: Pod azuredisk-volume-tester-qz8ng has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-qz8ng in namespace azuredisk-495
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:23:59.193: INFO: deleting PVC "azuredisk-495"/"pvc-kwh65"
Sep 11 18:23:59.193: INFO: Deleting PersistentVolumeClaim "pvc-kwh65"
STEP: waiting for claim's PV "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" to be deleted
Sep 11 18:23:59.229: INFO: Waiting up to 10m0s for PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa to get deleted
Sep 11 18:23:59.262: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Released (32.417848ms)
Sep 11 18:24:04.299: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (5.069425821s)
Sep 11 18:24:09.336: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (10.106398635s)
Sep 11 18:24:14.372: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (15.142511089s)
Sep 11 18:24:19.408: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (20.178764513s)
Sep 11 18:24:24.445: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (25.215732786s)
Sep 11 18:24:29.478: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa found and phase=Failed (30.24885105s)
Sep 11 18:24:34.511: INFO: PersistentVolume pvc-c23469cf-15b1-48ad-afa2-bc64548890fa was removed
Sep 11 18:24:34.511: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-495 to be removed
Sep 11 18:24:34.543: INFO: Claim "azuredisk-495" in namespace "pvc-kwh65" doesn't exist in the system
Sep 11 18:24:34.543: INFO: deleting StorageClass azuredisk-495-kubernetes.io-azure-disk-dynamic-sc-lx2m5
Sep 11 18:24:34.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-495" for this suite.
... skipping 21 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep 11 18:24:35.517: INFO: Waiting up to 10m0s for pod "azuredisk-volume-tester-gfnbv" in namespace "azuredisk-9947" to be "Error status code"
Sep 11 18:24:35.549: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 31.805732ms
Sep 11 18:24:37.583: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065775872s
Sep 11 18:24:39.616: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099302632s
Sep 11 18:24:41.656: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139249971s
Sep 11 18:24:43.689: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 8.171971279s
Sep 11 18:24:45.723: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205727765s
Sep 11 18:24:47.757: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.239627292s
Sep 11 18:24:49.790: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 14.272730744s
Sep 11 18:24:51.823: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 16.306250533s
Sep 11 18:24:53.857: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 18.340127136s
Sep 11 18:24:55.891: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 20.373756169s
Sep 11 18:24:57.933: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Pending", Reason="", readiness=false. Elapsed: 22.416306232s
Sep 11 18:24:59.966: INFO: Pod "azuredisk-volume-tester-gfnbv": Phase="Failed", Reason="", readiness=false. Elapsed: 24.449236913s
STEP: Saw pod failure
Sep 11 18:24:59.966: INFO: Pod "azuredisk-volume-tester-gfnbv" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep 11 18:25:00.000: INFO: deleting Pod "azuredisk-9947"/"azuredisk-volume-tester-gfnbv"
Sep 11 18:25:00.034: INFO: Pod azuredisk-volume-tester-gfnbv has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-gfnbv in namespace azuredisk-9947
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:25:00.141: INFO: deleting PVC "azuredisk-9947"/"pvc-clnt4"
Sep 11 18:25:00.141: INFO: Deleting PersistentVolumeClaim "pvc-clnt4"
STEP: waiting for claim's PV "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" to be deleted
Sep 11 18:25:00.177: INFO: Waiting up to 10m0s for PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd to get deleted
Sep 11 18:25:00.215: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Released (38.409968ms)
Sep 11 18:25:05.249: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (5.071756911s)
Sep 11 18:25:10.282: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (10.105333087s)
Sep 11 18:25:15.315: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (15.137901421s)
Sep 11 18:25:20.347: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (20.170116354s)
Sep 11 18:25:25.380: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (25.202893344s)
Sep 11 18:25:30.413: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd found and phase=Failed (30.2360651s)
Sep 11 18:25:35.447: INFO: PersistentVolume pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd was removed
Sep 11 18:25:35.447: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-9947 to be removed
Sep 11 18:25:35.479: INFO: Claim "azuredisk-9947" in namespace "pvc-clnt4" doesn't exist in the system
Sep 11 18:25:35.479: INFO: deleting StorageClass azuredisk-9947-kubernetes.io-azure-disk-dynamic-sc-cf6dp
Sep 11 18:25:35.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-9947" for this suite.
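The "checking that pod logs contain expected message" step above reads the tester pod's logs and asserts on their contents (here, the "Read-only file system" error). A minimal sketch of such a check, assuming a pre-built clientset (the helper name is illustrative, not the suite's actual function):

package sketch

import (
	"context"
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogsContain fetches the pod's logs and reports whether they contain the
// expected substring, e.g. "Read-only file system" for the read-only volume test.
func podLogsContain(client kubernetes.Interface, ns, name, expected string) (bool, error) {
	raw, err := client.CoreV1().Pods(ns).GetLogs(name, &corev1.PodLogOptions{}).Do(context.TODO()).Raw()
	if err != nil {
		return false, err
	}
	fmt.Printf("Pod %s has the following logs: %s\n", name, string(raw))
	return strings.Contains(string(raw), expected), nil
}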
... skipping 52 lines ...
Sep 11 18:26:34.312: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (5.070335039s)
Sep 11 18:26:39.345: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (10.102897433s)
Sep 11 18:26:44.382: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (15.139593061s)
Sep 11 18:26:49.417: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (20.175358671s)
Sep 11 18:26:54.454: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (25.21181576s)
Sep 11 18:26:59.489: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Bound (30.246797405s)
Sep 11 18:27:04.525: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (35.2831542s)
Sep 11 18:27:09.558: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (40.315640584s)
Sep 11 18:27:14.591: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (45.349018132s)
Sep 11 18:27:19.624: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (50.381632763s)
Sep 11 18:27:24.657: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (55.415126388s)
Sep 11 18:27:29.690: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc found and phase=Failed (1m0.447627786s)
Sep 11 18:27:34.722: INFO: PersistentVolume pvc-03d622fc-f747-4a7b-9193-5d2251c785bc was removed
Sep 11 18:27:34.723: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Sep 11 18:27:34.756: INFO: Claim "azuredisk-5541" in namespace "pvc-xrh5p" doesn't exist in the system
Sep 11 18:27:34.756: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-k69jd
Sep 11 18:27:34.791: INFO: deleting Pod "azuredisk-5541"/"azuredisk-volume-tester-r65dk"
Sep 11 18:27:34.838: INFO: Pod azuredisk-volume-tester-r65dk has the following logs: 
... skipping 8 lines ...
Sep 11 18:27:40.043: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (5.066346831s)
Sep 11 18:27:45.076: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (10.098666492s)
Sep 11 18:27:50.110: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (15.132912628s)
Sep 11 18:27:55.142: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (20.165506909s)
Sep 11 18:28:00.175: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (25.198347635s)
Sep 11 18:28:05.208: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Bound (30.230880324s)
Sep 11 18:28:10.241: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (35.264385285s)
Sep 11 18:28:15.274: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (40.297490463s)
Sep 11 18:28:20.308: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (45.331392991s)
Sep 11 18:28:25.341: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (50.364395943s)
Sep 11 18:28:30.375: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (55.397631132s)
Sep 11 18:28:35.408: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (1m0.431019975s)
Sep 11 18:28:40.441: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (1m5.463590411s)
Sep 11 18:28:45.474: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d found and phase=Failed (1m10.497208754s)
Sep 11 18:28:50.508: INFO: PersistentVolume pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d was removed
Sep 11 18:28:50.508: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Sep 11 18:28:50.539: INFO: Claim "azuredisk-5541" in namespace "pvc-x6p5m" doesn't exist in the system
Sep 11 18:28:50.539: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-r95hz
Sep 11 18:28:50.573: INFO: deleting Pod "azuredisk-5541"/"azuredisk-volume-tester-zkxrx"
Sep 11 18:28:50.617: INFO: Pod azuredisk-volume-tester-zkxrx has the following logs: 
... skipping 8 lines ...
Sep 11 18:28:55.814: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (5.065079329s)
Sep 11 18:29:00.847: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (10.097872629s)
Sep 11 18:29:05.881: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (15.131256091s)
Sep 11 18:29:10.913: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (20.164154651s)
Sep 11 18:29:15.946: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (25.197017399s)
Sep 11 18:29:20.980: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Bound (30.230949708s)
Sep 11 18:29:26.013: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Failed (35.263830998s)
Sep 11 18:29:31.047: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Failed (40.297542312s)
Sep 11 18:29:36.080: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Failed (45.330827235s)
Sep 11 18:29:41.114: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Failed (50.36455202s)
Sep 11 18:29:46.147: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac found and phase=Failed (55.398053877s)
Sep 11 18:29:51.180: INFO: PersistentVolume pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac was removed
Sep 11 18:29:51.180: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5541 to be removed
Sep 11 18:29:51.212: INFO: Claim "azuredisk-5541" in namespace "pvc-7wq65" doesn't exist in the system
Sep 11 18:29:51.212: INFO: deleting StorageClass azuredisk-5541-kubernetes.io-azure-disk-dynamic-sc-wnqxz
Sep 11 18:29:51.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5541" for this suite.
... skipping 61 lines ...
Sep 11 18:31:37.592: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (5.066261491s)
Sep 11 18:31:42.627: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (10.101174231s)
Sep 11 18:31:47.660: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (15.133792343s)
Sep 11 18:31:52.694: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (20.168321395s)
Sep 11 18:31:57.729: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (25.202763885s)
Sep 11 18:32:02.762: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Bound (30.235491701s)
Sep 11 18:32:07.796: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Failed (35.270206076s)
Sep 11 18:32:12.830: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Failed (40.303819976s)
Sep 11 18:32:17.866: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Failed (45.339568053s)
Sep 11 18:32:22.901: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Failed (50.374453182s)
Sep 11 18:32:27.933: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d found and phase=Failed (55.407040435s)
Sep 11 18:32:32.967: INFO: PersistentVolume pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d was removed
Sep 11 18:32:32.967: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5356 to be removed
Sep 11 18:32:32.999: INFO: Claim "azuredisk-5356" in namespace "pvc-mhpv8" doesn't exist in the system
Sep 11 18:32:32.999: INFO: deleting StorageClass azuredisk-5356-kubernetes.io-azure-disk-dynamic-sc-ms5cw
Sep 11 18:32:33.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "azuredisk-5356" for this suite.
... skipping 156 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep 11 18:32:52.269: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-z7gh7" in namespace "azuredisk-8510" to be "Succeeded or Failed"
Sep 11 18:32:52.323: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 54.062398ms
Sep 11 18:32:54.356: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086840784s
Sep 11 18:32:56.390: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120229615s
Sep 11 18:32:58.423: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154103707s
Sep 11 18:33:00.456: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.186793697s
Sep 11 18:33:02.489: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.219956473s
... skipping 9 lines ...
Sep 11 18:33:22.828: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.558646439s
Sep 11 18:33:24.862: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.593020596s
Sep 11 18:33:26.896: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.626995174s
Sep 11 18:33:28.929: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.659647824s
Sep 11 18:33:30.963: INFO: Pod "azuredisk-volume-tester-z7gh7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.693298232s
STEP: Saw pod success
Sep 11 18:33:30.963: INFO: Pod "azuredisk-volume-tester-z7gh7" satisfied condition "Succeeded or Failed"
Sep 11 18:33:30.963: INFO: deleting Pod "azuredisk-8510"/"azuredisk-volume-tester-z7gh7"
Sep 11 18:33:31.005: INFO: Pod azuredisk-volume-tester-z7gh7 has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-z7gh7 in namespace azuredisk-8510
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:33:31.128: INFO: deleting PVC "azuredisk-8510"/"pvc-2w9c7"
Sep 11 18:33:31.128: INFO: Deleting PersistentVolumeClaim "pvc-2w9c7"
STEP: waiting for claim's PV "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" to be deleted
Sep 11 18:33:31.164: INFO: Waiting up to 10m0s for PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 to get deleted
Sep 11 18:33:31.201: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Released (37.252467ms)
Sep 11 18:33:36.235: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (5.07120295s)
Sep 11 18:33:41.270: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (10.105572007s)
Sep 11 18:33:46.305: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (15.141199464s)
Sep 11 18:33:51.339: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (20.175453648s)
Sep 11 18:33:56.373: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (25.209127652s)
Sep 11 18:34:01.408: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (30.243689887s)
Sep 11 18:34:06.442: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (35.278506329s)
Sep 11 18:34:11.478: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (40.314056097s)
Sep 11 18:34:16.512: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (45.347853584s)
Sep 11 18:34:21.550: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (50.386463193s)
Sep 11 18:34:26.586: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (55.422474826s)
Sep 11 18:34:31.624: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 found and phase=Failed (1m0.459671501s)
Sep 11 18:34:36.656: INFO: PersistentVolume pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 was removed
Sep 11 18:34:36.656: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-8510 to be removed
Sep 11 18:34:36.688: INFO: Claim "azuredisk-8510" in namespace "pvc-2w9c7" doesn't exist in the system
Sep 11 18:34:36.688: INFO: deleting StorageClass azuredisk-8510-kubernetes.io-azure-disk-dynamic-sc-jgkn9
STEP: validating provisioned PV
STEP: checking the PV
... skipping 50 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep 11 18:34:58.374: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pcw6q" in namespace "azuredisk-5561" to be "Succeeded or Failed"
Sep 11 18:34:58.407: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 31.993765ms
Sep 11 18:35:00.440: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065168908s
Sep 11 18:35:02.475: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100326068s
Sep 11 18:35:04.509: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.134434282s
Sep 11 18:35:06.543: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.168476673s
Sep 11 18:35:08.577: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 10.202145112s
... skipping 9 lines ...
Sep 11 18:35:28.914: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 30.539602515s
Sep 11 18:35:30.949: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 32.574216051s
Sep 11 18:35:32.983: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 34.608224476s
Sep 11 18:35:35.016: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Pending", Reason="", readiness=false. Elapsed: 36.641717169s
Sep 11 18:35:37.050: INFO: Pod "azuredisk-volume-tester-pcw6q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.675803642s
STEP: Saw pod success
Sep 11 18:35:37.050: INFO: Pod "azuredisk-volume-tester-pcw6q" satisfied condition "Succeeded or Failed"
Sep 11 18:35:37.050: INFO: deleting Pod "azuredisk-5561"/"azuredisk-volume-tester-pcw6q"
Sep 11 18:35:37.097: INFO: Pod azuredisk-volume-tester-pcw6q has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.066485 seconds, 1.5GB/s
hello world

... skipping 2 lines ...
STEP: checking the PV
Sep 11 18:35:37.202: INFO: deleting PVC "azuredisk-5561"/"pvc-wz6tl"
Sep 11 18:35:37.202: INFO: Deleting PersistentVolumeClaim "pvc-wz6tl"
STEP: waiting for claim's PV "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" to be deleted
Sep 11 18:35:37.237: INFO: Waiting up to 10m0s for PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 to get deleted
Sep 11 18:35:37.268: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Released (31.707319ms)
Sep 11 18:35:42.304: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (5.067425084s)
Sep 11 18:35:47.338: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (10.10116514s)
Sep 11 18:35:52.373: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (15.136815194s)
Sep 11 18:35:57.406: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (20.169784751s)
Sep 11 18:36:02.441: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (25.204709205s)
Sep 11 18:36:07.476: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (30.239822785s)
Sep 11 18:36:12.513: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 found and phase=Failed (35.276233773s)
Sep 11 18:36:17.548: INFO: PersistentVolume pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 was removed
Sep 11 18:36:17.548: INFO: Waiting up to 5m0s for PersistentVolumeClaim azuredisk-5561 to be removed
Sep 11 18:36:17.580: INFO: Claim "azuredisk-5561" in namespace "pvc-wz6tl" doesn't exist in the system
Sep 11 18:36:17.580: INFO: deleting StorageClass azuredisk-5561-kubernetes.io-azure-disk-dynamic-sc-rph7n
STEP: validating provisioned PV
STEP: checking the PV
... skipping 94 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep 11 18:36:30.755: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" to be "Succeeded or Failed"
Sep 11 18:36:30.789: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 33.71967ms
Sep 11 18:36:32.823: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067446743s
Sep 11 18:36:34.858: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102989168s
Sep 11 18:36:36.892: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136749895s
Sep 11 18:36:38.926: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170137777s
Sep 11 18:36:40.960: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 10.204237838s
... skipping 118 lines ...
Sep 11 18:40:43.044: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m12.28885055s
Sep 11 18:40:45.079: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m14.323681595s
Sep 11 18:40:47.113: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m16.357333677s
Sep 11 18:40:49.147: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m18.391709198s
Sep 11 18:40:51.181: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m20.425676886s
Sep 11 18:40:53.215: INFO: Pod "azuredisk-volume-tester-hr88k": Phase="Pending", Reason="", readiness=false. Elapsed: 4m22.459850223s
Sep 11 18:40:55.249: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:40:57.281: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:40:59.313: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:01.346: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:03.379: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:05.413: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:07.445: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:09.479: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:11.512: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:13.545: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:15.579: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:17.612: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:19.650: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:21.684: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:23.717: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:25.750: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:27.785: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:29.818: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:31.851: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:33.884: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:35.917: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:37.950: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:39.983: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:42.016: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:44.050: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:46.084: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:48.116: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:50.150: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:52.182: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:54.216: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:56.250: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:41:58.283: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:00.317: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:02.350: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:04.384: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:06.417: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:08.482: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:10.514: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:12.547: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:14.580: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:16.616: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:18.650: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:20.683: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:22.717: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:24.750: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:26.791: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:28.825: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:30.857: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:32.891: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:34.924: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:36.957: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:38.991: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:41.024: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:43.057: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:45.090: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:47.125: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:49.159: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:51.192: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:53.225: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:55.257: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:57.292: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:42:59.325: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:01.358: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:03.391: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:05.425: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:07.458: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:09.491: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:11.525: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:13.558: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:15.591: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:17.625: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:19.659: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:21.694: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:23.727: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:25.761: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:27.795: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:29.828: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:31.861: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:33.894: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:35.928: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:37.961: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:39.994: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:42.028: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:44.061: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:46.094: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:48.127: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:50.159: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:52.193: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:54.225: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:56.259: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:43:58.292: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:00.326: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:02.359: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:04.392: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:06.424: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:08.462: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:10.495: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:12.528: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:14.561: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:16.595: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:18.628: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
Sep 11 18:44:20.661: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
... skipping 210 lines ...
Sep 11 18:51:29.892: INFO: Pod "azuredisk-volume-tester-hr88k" in namespace "azuredisk-953" not found. Error: pods "azuredisk-volume-tester-hr88k" not found
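The loop above is the test polling the API server for a pod that has already disappeared. A minimal client-go sketch of this kind of wait loop is shown below; it assumes a configured clientset, and the package and function names (podwait, WaitForPodSuccess) are illustrative rather than the driver's actual testsuites.go helpers.

// podwait.go - a minimal sketch, assuming a configured client-go clientset.
// Names here are illustrative, not the driver's actual test helpers.
package podwait

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// WaitForPodSuccess polls the pod every 2s until it reaches Succeeded or Failed.
// A NotFound error is logged and the poll continues, mirroring the repeated
// "not found" lines above; a stricter policy could fail fast instead.
func WaitForPodSuccess(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            fmt.Printf("Pod %q in namespace %q not found. Error: %v\n", name, ns, err)
            return false, nil
        }
        if err != nil {
            return false, err
        }
        switch pod.Status.Phase {
        case corev1.PodSucceeded:
            return true, nil
        case corev1.PodFailed:
            return false, fmt.Errorf("pod %s/%s failed", ns, name)
        }
        return false, nil
    })
}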
Sep 11 18:51:31.893: INFO: deleting Pod "azuredisk-953"/"azuredisk-volume-tester-hr88k"
Sep 11 18:51:31.926: INFO: Error getting logs for pod azuredisk-volume-tester-hr88k: the server could not find the requested resource (get pods azuredisk-volume-tester-hr88k)
STEP: Deleting pod azuredisk-volume-tester-hr88k in namespace azuredisk-953
STEP: validating provisioned PV
STEP: checking the PV
Sep 11 18:51:32.025: INFO: deleting PVC "azuredisk-953"/"pvc-njvrf"
Sep 11 18:51:32.025: INFO: Deleting PersistentVolumeClaim "pvc-njvrf"
STEP: waiting for claim's PV "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" to be deleted
... skipping 36 lines ...
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:33 +0000 UTC - event for azuredisk-volume-tester-hr88k: {default-scheduler } Scheduled: Successfully assigned azuredisk-953/azuredisk-volume-tester-hr88k to capz-4tyuov-md-0-sgwmt
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:33 +0000 UTC - event for pvc-gmvdp: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1 using kubernetes.io/azure-disk
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:33 +0000 UTC - event for pvc-lxnkg: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e using kubernetes.io/azure-disk
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:33 +0000 UTC - event for pvc-njvrf: {persistentvolume-controller } ProvisioningSucceeded: Successfully provisioned volume pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa using kubernetes.io/azure-disk
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:44 +0000 UTC - event for azuredisk-volume-tester-hr88k: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" 
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:36:54 +0000 UTC - event for azuredisk-volume-tester-hr88k: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e" 
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:38:15 +0000 UTC - event for azuredisk-volume-tester-hr88k: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" : Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="StorageFailure" Message="Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'."
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:38:16 +0000 UTC - event for azuredisk-volume-tester-hr88k: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" 
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:38:36 +0000 UTC - event for azuredisk-volume-tester-hr88k: {kubelet capz-4tyuov-md-0-sgwmt} FailedMount: Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[test-volume-1 test-volume-2 test-volume-3 kube-api-access-654qg]: timed out waiting for the condition
Sep 11 18:52:02.835: INFO: At 2021-09-11 18:41:05 +0000 UTC - event for azuredisk-volume-tester-hr88k: {kubelet capz-4tyuov-md-0-sgwmt} FailedMount: MountVolume.MountDevice failed while expanding volume for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" : mountVolume.NodeExpandVolume get PVC failed : persistentvolumeclaims "pvc-gmvdp" is forbidden: User "system:node:capz-4tyuov-md-0-sgwmt" cannot get resource "persistentvolumeclaims" in API group "" in the namespace "azuredisk-953": no relationship found between node 'capz-4tyuov-md-0-sgwmt' and this object
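The FailedAttachVolume / FailedMount entries above come from the namespace's event stream. As a rough illustration only (assuming the same client-go clientset as in the earlier sketch; the helper name DumpPodEvents is made up), the events for a single pod can be listed with a field selector:

// events.go - a hedged sketch; the pod name passed in would be the one from this run.
package podwait

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// DumpPodEvents prints every event whose involved object is the named pod,
// e.g. the FailedAttachVolume / FailedMount entries shown in the log above.
func DumpPodEvents(ctx context.Context, cs kubernetes.Interface, ns, podName string) error {
    events, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
        FieldSelector: "involvedObject.name=" + podName + ",involvedObject.kind=Pod",
    })
    if err != nil {
        return err
    }
    for _, e := range events.Items {
        fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("2006-01-02 15:04:05"), e.Reason, e.Message)
    }
    return nil
}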
Sep 11 18:52:02.867: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Sep 11 18:52:02.867: INFO: 
Sep 11 18:52:02.901: INFO: 
Logging node info for node capz-4tyuov-control-plane-gzjnv
Sep 11 18:52:02.934: INFO: Node Info: &Node{ObjectMeta:{capz-4tyuov-control-plane-gzjnv    7b90a7d0-4fa8-4a2f-b8b8-409b5cba9b17 5776 0 2021-09-11 18:02:02 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D2s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:eastus2 failure-domain.beta.kubernetes.io/zone:eastus2-3 kubernetes.io/arch:amd64 kubernetes.io/hostname:capz-4tyuov-control-plane-gzjnv kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:Standard_D2s_v3 topology.kubernetes.io/region:eastus2 topology.kubernetes.io/zone:eastus2-3] map[cluster.x-k8s.io/cluster-name:capz-4tyuov cluster.x-k8s.io/cluster-namespace:default cluster.x-k8s.io/machine:capz-4tyuov-control-plane-pfhfd cluster.x-k8s.io/owner-kind:KubeadmControlPlane cluster.x-k8s.io/owner-name:capz-4tyuov-control-plane kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:10.0.0.4/16 projectcalico.org/IPv4VXLANTunnelAddr:192.168.65.129 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-09-11 18:02:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}}} {kubeadm Update v1 2021-09-11 18:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2021-09-11 18:02:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:taints":{}}}} {manager Update v1 2021-09-11 18:02:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:cluster.x-k8s.io/cluster-name":{},"f:cluster.x-k8s.io/cluster-namespace":{},"f:cluster.x-k8s.io/machine":{},"f:cluster.x-k8s.io/owner-kind":{},"f:cluster.x-k8s.io/owner-name":{}}}}} {kubelet Update v1 2021-09-11 18:03:12 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}} {calico-node Update v1 2021-09-11 18:04:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4VXLANTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}}]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{133018140672 0} {<nil>}  BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8349151232 0} {<nil>} 8153468Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-azure-disk: {{4 0} {<nil>} 4 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{119716326407 0} {<nil>} 119716326407 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{8244293632 0} {<nil>} 8051068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-09-11 18:03:59 +0000 UTC,LastTransitionTime:2021-09-11 18:03:59 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2021-09-11 18:50:02 +0000 UTC,LastTransitionTime:2021-09-11 18:01:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2021-09-11 18:50:02 +0000 UTC,LastTransitionTime:2021-09-11 18:01:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2021-09-11 18:50:02 +0000 UTC,LastTransitionTime:2021-09-11 18:01:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-11 18:50:02 +0000 UTC,LastTransitionTime:2021-09-11 18:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:Hostname,Address:capz-4tyuov-control-plane-gzjnv,},NodeAddress{Type:InternalIP,Address:10.0.0.4,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:629ec150275a456297cb20c71d3377a1,SystemUUID:df784f2f-c5f3-ab47-8c25-d943ec088043,BootID:51a50ac8-97f8-4a71-9294-a2f7b261ecf5,KernelVersion:5.3.0-1034-azure,OSImage:Ubuntu 18.04.5 LTS,ContainerRuntimeVersion:containerd://1.3.4,KubeletVersion:v1.22.2-rc.0.32+b68064208b29e5,KubeProxyVersion:v1.22.2-rc.0.32+b68064208b29e5,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.22.2-rc.0.32_b68064208b29e5 k8s.gcr.io/kube-apiserver:v1.22.2-rc.0.32_b68064208b29e5 gcr.io/k8s-staging-ci-images/kube-apiserver:v1.22.2-rc.0.32_b68064208b29e5],SizeBytes:129669587,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.22.2-rc.0.32_b68064208b29e5 k8s.gcr.io/kube-controller-manager:v1.22.2-rc.0.32_b68064208b29e5 gcr.io/k8s-staging-ci-images/kube-controller-manager:v1.22.2-rc.0.32_b68064208b29e5],SizeBytes:123197763,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.22.2-rc.0.32_b68064208b29e5 k8s.gcr.io/kube-proxy-amd64:v1.22.2-rc.0.32_b68064208b29e5 gcr.io/k8s-staging-ci-images/kube-proxy:v1.22.2-rc.0.32_b68064208b29e5],SizeBytes:105438921,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:100947667,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d k8s.gcr.io/etcd:3.5.0-0],SizeBytes:99868722,},ContainerImage{Names:[docker.io/calico/node@sha256:7f9aa7e31fbcea7be64b153f8bcfd494de023679ec10d851a05667f0adb42650 docker.io/calico/node:v3.20.0],SizeBytes:60692708,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.22.2-rc.0.32_b68064208b29e5 k8s.gcr.io/kube-scheduler:v1.22.2-rc.0.32_b68064208b29e5 gcr.io/k8s-staging-ci-images/kube-scheduler:v1.22.2-rc.0.32_b68064208b29e5],SizeBytes:53893442,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e9071531a6aa14fe50d882a68f10ee710d5203dd4bb07ff7a87d29cdc5a1fd5b k8s.gcr.io/kube-apiserver:v1.18.8],SizeBytes:51087915,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:6f6bd5c06680713d1047f7e27794c7c7d11e6859de5787dd4ca17d204669e683 k8s.gcr.io/kube-proxy:v1.18.8],SizeBytes:49392804,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:8a2b2a8d3e586afdd223e096ab65db865d6dce680336f0b9f0d764b21abba06f k8s.gcr.io/kube-controller-manager:v1.18.8],SizeBytes:49134333,},ContainerImage{Names:[docker.io/calico/cni@sha256:9906e2cca8006e1fe9fc3f358a3a06da6253afdd6fad05d594e884e8298ffe1d docker.io/calico/cni:v3.20.0],SizeBytes:48391417,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:ec7c376c780a3dd02d7e5850a0ca3d09fc8df50ac3ceb37a2214d403585361a0 k8s.gcr.io/kube-scheduler:v1.18.8],SizeBytes:34079408,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:a850ce8daa84433a5641900693b0f8bc8e5177a4aa4cac8cf4b6cd8c24fd9886 docker.io/calico/kube-controllers:v3.20.0],SizeBytes:26118903,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890 k8s.gcr.io/coredns/coredns:v1.8.4],SizeBytes:13707249,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800 
k8s.gcr.io/coredns:1.6.7],SizeBytes:13598515,},ContainerImage{Names:[docker.io/calico/pod2daemon-flexvol@sha256:c17e3e9871682bed00bfd33f8d6f00db1d1a126034a25bf5380355978e0c548d docker.io/calico/pod2daemon-flexvol:v3.20.0],SizeBytes:9328481,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07 k8s.gcr.io/pause:3.5],SizeBytes:301416,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Sep 11 18:52:02.935: INFO: 
... skipping 66 lines ...
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:40
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:43
    should create a pod with multiple volumes [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:694

    Unexpected error:
        <*errors.StatusError | 0xc00061f0e0>: {
            ErrStatus: {
                TypeMeta: {Kind: "", APIVersion: ""},
                ListMeta: {
                    SelfLink: "",
                    ResourceVersion: "",
... skipping 302 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158
STEP: Creating a kubernetes client
Sep 11 18:55:19.351: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
I0911 18:55:19.528248   30799 azuredisk_driver.go:56] Using azure disk driver: kubernetes.io/azure-disk
... skipping 2 lines ...

S [SKIPPING] [0.245 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:67
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:158

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:264
------------------------------
... skipping 244 lines ...
I0911 18:01:59.240506       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1631383318\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1631383317\" (2021-09-11 17:01:57 +0000 UTC to 2022-09-11 17:01:57 +0000 UTC (now=2021-09-11 18:01:59.240475741 +0000 UTC))"
I0911 18:01:59.240957       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1631383319\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1631383318\" (2021-09-11 17:01:58 +0000 UTC to 2022-09-11 17:01:58 +0000 UTC (now=2021-09-11 18:01:59.240928741 +0000 UTC))"
I0911 18:01:59.240016       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0911 18:01:59.241194       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0911 18:01:59.241162       1 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0911 18:01:59.241660       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0911 18:02:02.647380       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0911 18:02:02.647647       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0911 18:02:05.583296       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-4tyuov-control-plane-gzjnv_8aa5ec72-5c1d-4c56-a12d-2e8e68ecb0ab became leader"
I0911 18:02:05.582814       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
W0911 18:02:05.625192       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0911 18:02:05.626300       1 azure_auth.go:232] Using AzurePublicCloud environment
I0911 18:02:05.626373       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0911 18:02:05.626529       1 azure_interfaceclient.go:62] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0911 18:02:05.629136       1 reflector.go:255] Listing and watching *v1.Secret from k8s.io/client-go/informers/factory.go:134
I0911 18:02:05.629362       1 shared_informer.go:240] Waiting for caches to sync for tokens
I0911 18:02:05.630955       1 reflector.go:219] Starting reflector *v1.ServiceAccount (17h19m49.665656665s) from k8s.io/client-go/informers/factory.go:134
I0911 18:02:05.631016       1 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/informers/factory.go:134
I0911 18:02:05.631416       1 reflector.go:219] Starting reflector *v1.Node (17h19m49.665656665s) from k8s.io/client-go/informers/factory.go:134
I0911 18:02:05.631493       1 reflector.go:255] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:134
W0911 18:02:05.691238       1 azure_config.go:52] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0911 18:02:05.691262       1 controllermanager.go:562] Starting "daemonset"
I0911 18:02:05.705847       1 controllermanager.go:577] Started "daemonset"
I0911 18:02:05.705921       1 controllermanager.go:562] Starting "attachdetach"
I0911 18:02:05.706175       1 daemon_controller.go:284] Starting daemon sets controller
I0911 18:02:05.706246       1 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0911 18:02:05.729442       1 shared_informer.go:270] caches populated
... skipping 5 lines ...
I0911 18:02:05.739497       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0911 18:02:05.739534       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0911 18:02:05.739623       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0911 18:02:05.739667       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0911 18:02:05.739686       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0911 18:02:05.739797       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0911 18:02:05.739941       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0911 18:02:05.739963       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0911 18:02:05.740203       1 controllermanager.go:577] Started "attachdetach"
I0911 18:02:05.740224       1 controllermanager.go:562] Starting "endpoint"
I0911 18:02:05.740321       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-control-plane-gzjnv"
W0911 18:02:05.740402       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4tyuov-control-plane-gzjnv" does not exist
I0911 18:02:05.740485       1 attach_detach_controller.go:328] Starting attach detach controller
I0911 18:02:05.740536       1 shared_informer.go:240] Waiting for caches to sync for attach detach
I0911 18:02:05.792690       1 controllermanager.go:577] Started "endpoint"
I0911 18:02:05.793505       1 controllermanager.go:562] Starting "resourcequota"
I0911 18:02:05.794535       1 endpoints_controller.go:195] Starting endpoint controller
I0911 18:02:05.794696       1 shared_informer.go:240] Waiting for caches to sync for endpoint
... skipping 226 lines ...
I0911 18:02:09.658903       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0911 18:02:09.659022       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0911 18:02:09.659156       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0911 18:02:09.659237       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0911 18:02:09.659331       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0911 18:02:09.659416       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0911 18:02:09.659536       1 csi_plugin.go:256] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0911 18:02:09.659612       1 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0911 18:02:09.659722       1 controllermanager.go:577] Started "persistentvolume-binder"
I0911 18:02:09.659830       1 controllermanager.go:562] Starting "endpointslicemirroring"
I0911 18:02:09.662512       1 pv_controller_base.go:308] Starting persistent volume controller
I0911 18:02:09.664800       1 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0911 18:02:09.799868       1 controllermanager.go:577] Started "endpointslicemirroring"
... skipping 377 lines ...
I0911 18:02:12.675674       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:02:12.675779       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:02:12.675871       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:02:12.675980       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (9.790783ms)
I0911 18:02:12.761940       1 publisher.go:186] Finished syncing namespace "kube-system" (329.591394ms)
I0911 18:02:12.762236       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="1.16113639s"
I0911 18:02:12.762271       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0911 18:02:12.762303       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2021-09-11 18:02:12.762289271 +0000 UTC m=+15.499410932"
I0911 18:02:12.763136       1 deployment_util.go:808] Deployment "coredns" timed out (false) [last progress check: 2021-09-11 18:02:11 +0000 UTC - now: 2021-09-11 18:02:12.763130379 +0000 UTC m=+15.500252140]
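The "Operation cannot be fulfilled ... the object has been modified" message a few lines above is an ordinary optimistic-concurrency conflict; callers normally re-read the object and retry. A small sketch of that pattern using client-go's retry helper follows; it is not the controller-manager's own code, and ScaleWithRetry plus the choice of a Deployment update are illustrative.

// retryconflict.go - a sketch of the standard conflict-retry pattern, assuming a client-go clientset.
package podwait

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/util/retry"
)

// ScaleWithRetry re-reads the Deployment and reapplies the change whenever the
// update is rejected with a resourceVersion conflict ("the object has been modified").
func ScaleWithRetry(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        d.Spec.Replicas = &replicas
        _, err = cs.AppsV1().Deployments(ns).Update(ctx, d, metav1.UpdateOptions{})
        return err
    })
}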
I0911 18:02:12.773343       1 endpoints_controller.go:387] Finished syncing service "kube-system/kube-dns" endpoints. (18µs)
I0911 18:02:12.774651       1 replica_set.go:380] Pod coredns-78fcd69978-66gxs created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"coredns-78fcd69978-66gxs", GenerateName:"coredns-78fcd69978-", Namespace:"kube-system", SelfLink:"", UID:"ad913ff6-69e0-4c0b-8180-f8c631807a0c", ResourceVersion:"390", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63766980132, loc:(*time.Location)(0x7504dc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"78fcd69978"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"coredns-78fcd69978", UID:"990871db-9843-4b84-ae67-7b0eabf5dd9e", Controller:(*bool)(0xc001506ae7), BlockOwnerDeletion:(*bool)(0xc001506ae8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00230b1d0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00230b1e8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"config-volume", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc002356600), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-4tdgq", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002308c40), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"coredns", Image:"k8s.gcr.io/coredns/coredns:v1.8.4", Command:[]string(nil), Args:[]string{"-conf", "/etc/coredns/Corefile"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"dns", HostPort:0, ContainerPort:53, Protocol:"UDP", HostIP:""}, v1.ContainerPort{Name:"dns-tcp", HostPort:0, ContainerPort:53, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:0, ContainerPort:9153, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:178257920, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"170Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:73400320, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"70Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"config-volume", ReadOnly:true, MountPath:"/etc/coredns", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-4tdgq", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002356780), ReadinessProbe:(*v1.Probe)(0xc0023567c0), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc00232d920), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001506c30), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"coredns", DeprecatedServiceAccount:"coredns", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000b70150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001506ca0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", 
Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001506cc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001506cc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001506ccc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0023459b0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0911 18:02:12.775203       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc04759a8fb0d7a68, ext:14727860809, loc:(*time.Location)(0x7504dc0)}}
I0911 18:02:12.775344       1 pvc_protection_controller.go:402] "Enqueuing PVCs for Pod" pod="kube-system/coredns-78fcd69978-66gxs" podUID=ad913ff6-69e0-4c0b-8180-f8c631807a0c
... skipping 235 lines ...
I0911 18:02:31.558198       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:02:31.558694       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:02:31.560020       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
I0911 18:02:31.559788       1 controller_utils.go:122] Update ready status of pods on node [capz-4tyuov-control-plane-gzjnv]
I0911 18:02:31.563291       1 disruption.go:391] update DB "calico-kube-controllers"
I0911 18:02:31.592216       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="1.974632644s"
I0911 18:02:31.592500       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0911 18:02:31.592578       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2021-09-11 18:02:31.592560622 +0000 UTC m=+34.329682383"
I0911 18:02:31.598875       1 daemon_controller.go:247] Updating daemon set calico-node
I0911 18:02:31.609524       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.576159655s)
I0911 18:02:31.610938       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (1.430612824s)
I0911 18:02:31.618243       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759adca33ed86, ext:33908296963, loc:(*time.Location)(0x7504dc0)}}
I0911 18:02:31.618868       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759ade4e31bf5, ext:34355985366, loc:(*time.Location)(0x7504dc0)}}
... skipping 27 lines ...
I0911 18:02:31.813047       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:02:31.813143       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:02:31.813259       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.259933ms)
I0911 18:02:31.813690       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:02:31.825218       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:02:31.825359       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
E0911 18:02:31.825694       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.825797       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.825952       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.847179       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.847195       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.847218       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.847723       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.847845       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.847972       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.848340       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.848349       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.848382       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.848722       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.848828       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.848920       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.849258       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.849356       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.849462       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.862543       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.862650       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.862769       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.863148       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.863243       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.863353       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.863666       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.863751       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.863849       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.874369       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.874488       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.874656       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.874968       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.875092       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.875185       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:31.879765       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:31.886602       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:31.887160       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
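The repeated driver-call/plugins trio above is the FlexVolume plugin probe failing because the nodeagent~uds driver binary is not installed: the fork/exec fails, the driver therefore produces no output, and decoding that empty output as JSON fails with "unexpected end of JSON input". A minimal Go sketch reproducing just the unmarshal error (illustrative only; the DriverStatus struct below is an assumption, not the actual Kubernetes type):

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus stands in for the structure a FlexVolume call result would be
// decoded into; the fields here are illustrative assumptions, not the real type.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// The driver binary could not be executed, so its output is empty.
	output := ""

	var status DriverStatus
	err := json.Unmarshal([]byte(output), &status)
	fmt.Println(err) // prints: unexpected end of JSON input
}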
I0911 18:02:32.387180       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:02:32.387516       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:02:32.387682       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:02:32.387843       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-4tyuov-control-plane-gzjnv"
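The azure_vmss/azure_instances lines above show the Azure cloud provider classifying the control-plane node: its providerID ends in a Microsoft.Compute/virtualMachines path rather than a virtualMachineScaleSets path, so no scale set name can be extracted and the VM is assumed to be managed by an availability set. A rough sketch of that classification (the regex and helper below are assumptions for illustration, not the cloud provider's actual code):

package main

import (
	"fmt"
	"regexp"
)

// Assumption: a simplified stand-in for providerID parsing, only to show why a
// plain virtualMachines providerID is reported as "not a vmss instance".
var vmssPath = regexp.MustCompile(`/virtualMachineScaleSets/([^/]+)/virtualMachines/`)

func scaleSetName(providerID string) (string, bool) {
	m := vmssPath.FindStringSubmatch(providerID)
	if m == nil {
		return "", false // standalone VM: assume availability-set managed
	}
	return m[1], true
}

func main() {
	id := "azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv"
	if _, ok := scaleSetName(id); !ok {
		fmt.Println("can not extract scale set name; assuming availability set")
	}
}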
I0911 18:02:32.841713       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-ylbvyy" (50.601µs)
I0911 18:02:32.846738       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-mocq8m" (5.3µs)
... skipping 172 lines ...
I0911 18:02:50.632847       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b2a5b00987, ext:53369415528, loc:(*time.Location)(0x7504dc0)}}
I0911 18:02:50.633056       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b2a5bb9daa, ext:53370174247, loc:(*time.Location)(0x7504dc0)}}
I0911 18:02:50.633154       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:02:50.633380       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:02:50.633566       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:02:50.634318       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (29.584321ms)
E0911 18:02:50.635760       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
I0911 18:02:50.636210       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:02:50.625367       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
W0911 18:02:50.644700       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.645642       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:02:50.645946       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:02:50.646052       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
E0911 18:02:50.646418       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.646429       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.646446       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:02:50.710630       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
E0911 18:02:50.735511       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.736313       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.737124       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.738486       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.738642       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.738790       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.758881       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.760097       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.761080       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.763544       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.770566       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.771380       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.772689       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.773294       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.774051       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.777423       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.782426       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.782583       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.783073       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.783084       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.783118       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.790836       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.791455       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.792318       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.793516       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.793526       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.793542       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:50.799395       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:50.799450       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:50.799602       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:02:50.810202       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
I0911 18:02:50.912803       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
I0911 18:02:50.971669       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-mocq8m" (6.8µs)
I0911 18:02:51.010890       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=ippools
I0911 18:02:51.109930       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
I0911 18:02:51.213568       1 graph_builder.go:279] garbage controller monitor not yet synced: crd.projectcalico.org/v1, Resource=bgpconfigurations
... skipping 26 lines ...
I0911 18:02:59.748383       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:02:59.748847       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:02:59.749774       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (3.586914ms)
I0911 18:02:59.751736       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:02:59.754575       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:02:59.755068       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
E0911 18:02:59.773097       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.773193       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.773308       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.774168       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.778613       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.779327       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.780816       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.780867       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.780948       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.786256       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.786548       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.786726       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.846647       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.846665       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.846692       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.911989       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.912316       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.912574       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.914461       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.918424       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.918524       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.918833       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.918843       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.918858       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.975359       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.976495       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.977212       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.980313       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.983099       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.986422       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.986725       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.986734       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.986748       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:02:59.986964       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:02:59.986970       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:02:59.986983       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:02.462212       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:03:02.462271       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:03:02.462293       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:02.462313       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:04.817917       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="95.501µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:37926" resp=200
I0911 18:03:04.878043       1 daemon_controller.go:570] Pod calico-node-qfp2g updated.
... skipping 7 lines ...
I0911 18:03:04.888116       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:04.888659       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:04.891418       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (10.864646ms)
I0911 18:03:04.891873       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:03:04.892006       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:04.892082       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
E0911 18:03:04.892423       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.892507       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.892611       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:04.919004       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.919046       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.919085       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:04.920806       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.920824       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.920948       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:04.950156       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.950204       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.951280       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:04.954777       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.954787       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.954804       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:04.982188       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:04.982207       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:04.982232       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.019797       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.019813       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.019837       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.020099       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.020107       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.020122       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.020531       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.020617       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.020711       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.020971       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.021053       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.021158       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.021443       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.021527       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.021621       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:05.021899       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:05.022018       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:05.022104       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:07.857990       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:03:07.874469       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-control-plane-gzjnv), assuming it is managed by availability set: not a vmss instance
I0911 18:03:07.874649       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:07.874804       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:11.379606       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:03:11.411653       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 4 lines ...
I0911 18:03:11.973145       1 disruption.go:427] updatePod called on pod "calico-node-qfp2g"
I0911 18:03:11.976039       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-qfp2g, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:11.976239       1 disruption.go:430] No matching pdb for pod "calico-node-qfp2g"
I0911 18:03:11.976926       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b634d8462c, ext:67623710733, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:11.984503       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b7faad6263, ext:74721563204, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:11.985664       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
E0911 18:03:11.990345       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:11.990496       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:11.990891       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:11.991340       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:11.991484       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b7faad6263, ext:74721563204, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:11.991666       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759b7fb1b91e2, ext:74728784323, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:11.991781       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:11.991979       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:11.992185       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:11.992425       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (24.423102ms)
E0911 18:03:11.992768       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:11.994425       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:11.994567       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:11.994954       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:11.995086       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:11.995288       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:11.997087       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.005321       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.005503       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.005819       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.005830       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.005848       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.006088       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.006095       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.006111       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.006464       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.006551       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.006679       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.007053       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.007137       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.007281       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.007694       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.007822       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.008343       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.022985       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.022997       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.023015       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.023276       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.023283       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.023296       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.029568       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.030249       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.030375       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:12.054297       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:12.058516       1 controller_utils.go:209] Added [] Taint to Node capz-4tyuov-control-plane-gzjnv
I0911 18:03:12.058633       1 controller.go:269] Triggering nodeSync
I0911 18:03:12.058642       1 controller.go:288] nodeSync has been triggered
I0911 18:03:12.067079       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:03:12.067192       1 controller.go:779] Finished updateLoadBalancerHosts
... skipping 6 lines ...
I0911 18:03:12.190307       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/coredns-78fcd69978-66gxs"
I0911 18:03:12.190500       1 timed_workers.go:113] Adding TimedWorkerQueue item kube-system/coredns-78fcd69978-66gxs at 2021-09-11 18:03:12.19038374 +0000 UTC m=+74.927505501 to be fired at 2021-09-11 18:08:12.19038374 +0000 UTC m=+374.927505501
I0911 18:03:12.190887       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-66gxs"
I0911 18:03:12.191049       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-66gxs, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:12.191182       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-66gxs"
I0911 18:03:12.191334       1 controller_utils.go:122] Update ready status of pods on node [capz-4tyuov-control-plane-gzjnv]
E0911 18:03:12.191641       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.191748       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.191852       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.192203       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.192282       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.192411       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:12.219637       1 replica_set.go:443] Pod coredns-78fcd69978-gcdhj updated, objectMeta {Name:coredns-78fcd69978-gcdhj GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:08abd1ba-195e-489e-84c9-31de62423054 ResourceVersion:409 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc002384b87 BlockOwnerDeletion:0xc002384b88}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-78fcd69978-gcdhj GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:08abd1ba-195e-489e-84c9-31de62423054 ResourceVersion:610 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc00280f50f BlockOwnerDeletion:0xc00280f530}] Finalizers:[] ClusterName: 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0911 18:03:12.220940       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc04759a8fb0d7a68, ext:14727860809, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:12.221881       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (997.716µs)
I0911 18:03:12.222701       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-gcdhj"
I0911 18:03:12.229246       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-gcdhj, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:12.230569       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-gcdhj"
... skipping 6 lines ...
I0911 18:03:12.231682       1 timed_workers.go:113] Adding TimedWorkerQueue item kube-system/calico-kube-controllers-846b5f484d-q7c5m at 2021-09-11 18:03:12.231675421 +0000 UTC m=+74.968797182 to be fired at 2021-09-11 18:08:12.231675421 +0000 UTC m=+374.968797182
I0911 18:03:12.228034       1 controller_utils.go:122] Update ready status of pods on node [capz-4tyuov-control-plane-gzjnv]
I0911 18:03:12.228281       1 replica_set.go:443] Pod calico-kube-controllers-846b5f484d-q7c5m updated, objectMeta {Name:calico-kube-controllers-846b5f484d-q7c5m GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:8c6ec913-f779-4973-af55-5356d7b589e2 ResourceVersion:531 Generation:0 CreationTimestamp:2021-09-11 18:02:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:784f8a9c-f5cb-474b-80a9-5adeee412f87 Controller:0xc000f6bb3e BlockOwnerDeletion:0xc000f6bb3f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:29 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"784f8a9c-f5cb-474b-80a9-5adeee412f87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:calico-kube-controllers-846b5f484d-q7c5m GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:8c6ec913-f779-4973-af55-5356d7b589e2 ResourceVersion:611 Generation:0 CreationTimestamp:2021-09-11 18:02:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:784f8a9c-f5cb-474b-80a9-5adeee412f87 Controller:0xc00280f79e BlockOwnerDeletion:0xc00280f79f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:29 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"784f8a9c-f5cb-474b-80a9-5adeee412f87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0911 18:03:12.234133       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc04759ad6f7f8b01, ext:32534009470, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:12.234458       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-846b5f484d" (328.506µs)
I0911 18:03:12.228975       1 controller_utils.go:122] Update ready status of pods on node [capz-4tyuov-control-plane-gzjnv]
E0911 18:03:12.234944       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.235530       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.236514       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.238621       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.239711       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.240850       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:12.243314       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:12.247039       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:12.247158       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:12.856758       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-4tyuov-control-plane-gzjnv transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-09-11 18:02:48 +0000 UTC,LastTransitionTime:2021-09-11 18:01:52 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-11 18:03:11 +0000 UTC,LastTransitionTime:2021-09-11 18:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0911 18:03:12.863500       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-control-plane-gzjnv ReadyCondition updated. Updating timestamp.
I0911 18:03:12.959264       1 replica_set.go:443] Pod coredns-78fcd69978-66gxs updated, objectMeta {Name:coredns-78fcd69978-66gxs GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:ad913ff6-69e0-4c0b-8180-f8c631807a0c ResourceVersion:609 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc00288f407 BlockOwnerDeletion:0xc00288f408}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-78fcd69978-66gxs GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:ad913ff6-69e0-4c0b-8180-f8c631807a0c ResourceVersion:617 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc0028567a7 BlockOwnerDeletion:0xc0028567a8}] Finalizers:[] ClusterName: 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-09-11 18:03:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0911 18:03:12.959988       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc04759a8fb0d7a68, ext:14727860809, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:12.960488       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (503.909µs)
I0911 18:03:12.960615       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-66gxs"
I0911 18:03:12.971188       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-66gxs, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:12.971371       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-66gxs"
E0911 18:03:13.064918       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:13.068466       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:13.068960       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:13.112288       1 node_lifecycle_controller.go:893] Node capz-4tyuov-control-plane-gzjnv is healthy again, removing all taints
I0911 18:03:13.112489       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0911 18:03:13.153787       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-control-plane-gzjnv"
E0911 18:03:13.196457       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:13.196543       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:13.198446       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:13.203017       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-4tyuov-control-plane-gzjnv}
I0911 18:03:13.210365       1 taint_manager.go:440] "Updating known taints on node" node="capz-4tyuov-control-plane-gzjnv" taints=[]
I0911 18:03:13.210452       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-4tyuov-control-plane-gzjnv"
I0911 18:03:13.210484       1 timed_workers.go:132] Cancelling TimedWorkerQueue item kube-system/coredns-78fcd69978-66gxs at 2021-09-11 18:03:13.210479981 +0000 UTC m=+75.947601642
I0911 18:03:13.210598       1 timed_workers.go:132] Cancelling TimedWorkerQueue item kube-system/coredns-78fcd69978-gcdhj at 2021-09-11 18:03:13.210594383 +0000 UTC m=+75.947716144
I0911 18:03:13.210666       1 timed_workers.go:132] Cancelling TimedWorkerQueue item kube-system/calico-kube-controllers-846b5f484d-q7c5m at 2021-09-11 18:03:13.210663384 +0000 UTC m=+75.947785145
... skipping 5 lines ...
I0911 18:03:19.175416       1 replica_set.go:443] Pod coredns-78fcd69978-gcdhj updated, objectMeta {Name:coredns-78fcd69978-gcdhj GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:08abd1ba-195e-489e-84c9-31de62423054 ResourceVersion:610 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc00280f50f BlockOwnerDeletion:0xc00280f530}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-78fcd69978-gcdhj GenerateName:coredns-78fcd69978- Namespace:kube-system SelfLink: UID:08abd1ba-195e-489e-84c9-31de62423054 ResourceVersion:625 Generation:0 CreationTimestamp:2021-09-11 18:02:12 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:78fcd69978] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-78fcd69978 UID:990871db-9843-4b84-ae67-7b0eabf5dd9e Controller:0xc000e8a1a7 BlockOwnerDeletion:0xc000e8a1a8}] Finalizers:[] ClusterName: 
ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:12 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"990871db-9843-4b84-ae67-7b0eabf5dd9e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-09-11 18:03:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0911 18:03:19.177927       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-78fcd69978", timestamp:time.Time{wall:0xc04759a8fb0d7a68, ext:14727860809, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:19.178795       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/coredns-78fcd69978" (873.111µs)
I0911 18:03:19.180500       1 disruption.go:427] updatePod called on pod "coredns-78fcd69978-gcdhj"
I0911 18:03:19.181269       1 disruption.go:490] No PodDisruptionBudgets found for pod coredns-78fcd69978-gcdhj, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:19.181712       1 disruption.go:430] No matching pdb for pod "coredns-78fcd69978-gcdhj"
E0911 18:03:19.183013       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:19.183109       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:19.183254       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:19.185004       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:19.185090       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:19.185180       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:22.301398       1 replica_set.go:443] Pod calico-kube-controllers-846b5f484d-q7c5m updated, objectMeta {Name:calico-kube-controllers-846b5f484d-q7c5m GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:8c6ec913-f779-4973-af55-5356d7b589e2 ResourceVersion:611 Generation:0 CreationTimestamp:2021-09-11 18:02:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:784f8a9c-f5cb-474b-80a9-5adeee412f87 Controller:0xc00280f79e BlockOwnerDeletion:0xc00280f79f}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:29 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"784f8a9c-f5cb-474b-80a9-5adeee412f87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:calico-kube-controllers-846b5f484d-q7c5m GenerateName:calico-kube-controllers-846b5f484d- Namespace:kube-system SelfLink: UID:8c6ec913-f779-4973-af55-5356d7b589e2 ResourceVersion:628 Generation:0 CreationTimestamp:2021-09-11 18:02:31 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:846b5f484d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-846b5f484d UID:784f8a9c-f5cb-474b-80a9-5adeee412f87 Controller:0xc0015bb7d7 BlockOwnerDeletion:0xc0015bb7d8}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:02:29 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"784f8a9c-f5cb-474b-80a9-5adeee412f87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2021-09-11 18:02:31 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-09-11 18:03:21 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0911 18:03:22.302511       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-846b5f484d", timestamp:time.Time{wall:0xc04759ad6f7f8b01, ext:32534009470, loc:(*time.Location)(0x7504dc0)}}
E0911 18:03:22.303963       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:22.306935       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:22.307149       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:22.304056       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-846b5f484d-q7c5m"
I0911 18:03:22.311638       1 disruption.go:433] updatePod "calico-kube-controllers-846b5f484d-q7c5m" -> PDB "calico-kube-controllers"
I0911 18:03:22.314699       1 disruption.go:558] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (103.901µs)
I0911 18:03:22.315681       1 replica_set.go:653] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-846b5f484d" (13.18337ms)
I0911 18:03:22.831604       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-control-plane-gzjnv"
I0911 18:03:23.125381       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-control-plane-gzjnv ReadyCondition updated. Updating timestamp.
... skipping 61 lines ...
I0911 18:03:26.381443       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:03:26.683954       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:03:28.891686       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759ab8d53bc21, ext:24960713118, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:28.891926       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc35295e3f, ext:91629025312, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:28.892079       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-4tyuov-md-0-sgwmt], creating 1
I0911 18:03:28.892677       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
W0911 18:03:28.892792       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4tyuov-md-0-sgwmt" does not exist
I0911 18:03:28.892906       1 controller.go:682] Ignoring node capz-4tyuov-md-0-sgwmt with Ready condition status False
I0911 18:03:28.893007       1 controller.go:269] Triggering nodeSync
I0911 18:03:28.893097       1 controller.go:288] nodeSync has been triggered
I0911 18:03:28.893197       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:03:28.893287       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:03:28.893356       1 controller.go:720] It took 0.0001609 seconds to finish nodeSyncInternal
... skipping 36 lines ...
I0911 18:03:29.251053       1 daemon_controller.go:570] Pod kube-proxy-qvrvh updated.
I0911 18:03:29.252293       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/kube-proxy-qvrvh"
I0911 18:03:29.252983       1 disruption.go:427] updatePod called on pod "kube-proxy-qvrvh"
I0911 18:03:29.253541       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-qvrvh, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:29.253963       1 disruption.go:430] No matching pdb for pod "kube-proxy-qvrvh"
I0911 18:03:29.254623       1 daemon_controller.go:247] Updating daemon set kube-proxy
E0911 18:03:29.255851       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.258485       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.263030       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:29.262918       1 daemon_controller.go:570] Pod calico-node-9xrjr updated.
I0911 18:03:29.262953       1 disruption.go:427] updatePod called on pod "calico-node-9xrjr"
I0911 18:03:29.263493       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-9xrjr, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:29.263568       1 disruption.go:430] No matching pdb for pod "calico-node-9xrjr"
I0911 18:03:29.262991       1 taint_manager.go:400] "Noticed pod update" pod="kube-system/calico-node-9xrjr"
E0911 18:03:29.263779       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.263857       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.263963       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.264226       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.264235       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.264249       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.264437       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.264444       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.264457       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.264825       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.264899       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.265010       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.265247       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.265323       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.265415       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.266101       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.270155       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.270802       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.272304       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.275486       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.277577       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:29.277810       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (386.4864ms)
I0911 18:03:29.278328       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc49eea6e4, ext:91903756897, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.279719       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc50aba36e, ext:92016805611, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.279842       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:29.280120       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:29.280671       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc50aba36e, ext:92016805611, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.283826       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc50ea3e5f, ext:92020908508, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.286636       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:29.287088       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:29.287942       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:29.288556       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (10.589892ms)
E0911 18:03:29.288859       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.288983       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.289105       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.289437       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.294574       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.295460       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.298122       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.298934       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.299743       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.303921       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.306048       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.306200       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:29.306011       1 daemon_controller.go:247] Updating daemon set calico-node
E0911 18:03:29.306661       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.306748       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.306844       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.307120       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.307259       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.307365       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.307768       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.307777       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.307791       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.308005       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.308012       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.308026       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:29.318231       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (423.802671ms)
I0911 18:03:29.319317       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bc3568649b, ext:91633155808, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.319493       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bc530a79b3, ext:92056575380, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:29.319588       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:29.319753       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:29.319837       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bc530a79b3, ext:92056575380, loc:(*time.Location)(0x7504dc0)}}
... skipping 24 lines ...
I0911 18:03:29.733711       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:29.733727       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:29.733753       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (557.1µs)
I0911 18:03:29.734263       1 disruption.go:427] updatePod called on pod "kube-proxy-qvrvh"
I0911 18:03:29.734420       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-qvrvh, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:29.734509       1 disruption.go:430] No matching pdb for pod "kube-proxy-qvrvh"
E0911 18:03:29.742620       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.742711       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.742827       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.743135       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.743219       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.743324       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.747139       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.747255       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.747392       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:29.747739       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:29.747827       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:29.747926       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:31.646648       1 gc_controller.go:161] GC'ing orphaned
I0911 18:03:31.646712       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:03:31.867333       1 daemon_controller.go:570] Pod calico-node-9xrjr updated.
I0911 18:03:31.868656       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bc57305bae, ext:92126166827, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.868841       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bcf3c96585, ext:94605958502, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.868939       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
... skipping 5 lines ...
I0911 18:03:31.869705       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:31.869871       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (2.322598ms)
I0911 18:03:31.870155       1 disruption.go:427] updatePod called on pod "calico-node-9xrjr"
I0911 18:03:31.870288       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-9xrjr, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:31.870371       1 disruption.go:430] No matching pdb for pod "calico-node-9xrjr"
I0911 18:03:31.870595       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
E0911 18:03:31.871092       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.876723       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.881372       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.881759       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.881863       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.881991       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.882292       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.882301       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.882318       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:31.885833       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bcf3d18e94, ext:94606493201, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.891386       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bcf5210059, ext:94628476986, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.893047       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-4tyuov-md-0-pxbpw], creating 1
I0911 18:03:31.898147       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
W0911 18:03:31.898990       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4tyuov-md-0-pxbpw" does not exist
I0911 18:03:31.899089       1 controller.go:682] Ignoring node capz-4tyuov-md-0-sgwmt with Ready condition status False
I0911 18:03:31.899193       1 controller.go:682] Ignoring node capz-4tyuov-md-0-pxbpw with Ready condition status False
I0911 18:03:31.899271       1 controller.go:269] Triggering nodeSync
I0911 18:03:31.899348       1 controller.go:288] nodeSync has been triggered
I0911 18:03:31.899415       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:03:31.899486       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:03:31.899541       1 controller.go:720] It took 0.0001279 seconds to finish nodeSyncInternal
I0911 18:03:31.900913       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-4tyuov-md-0-pxbpw}
I0911 18:03:31.901759       1 taint_manager.go:440] "Updating known taints on node" node="capz-4tyuov-md-0-pxbpw" taints=[]
I0911 18:03:31.924600       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bc6bbb1d39, ext:92470804762, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.930776       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bcf77a0e8d, ext:94667867858, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:31.935896       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-4tyuov-md-0-pxbpw], creating 1
E0911 18:03:31.956730       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.956748       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.956773       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.959968       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.960762       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.961270       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.963265       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.963965       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.964556       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.964949       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.965052       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.965249       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.965605       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.965896       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.966016       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.966382       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.966500       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.966597       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.966884       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.966970       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.967075       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.967392       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.967481       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.967623       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:31.968360       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:31.968455       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:31.968575       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:32.022316       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:03:32.034781       1 controller_utils.go:581] Controller calico-node created pod calico-node-hrdqp
I0911 18:03:32.034942       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:32.035045       1 controller_utils.go:195] Controller still waiting on expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bcf5210059, ext:94628476986, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.035191       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:32.035797       1 event.go:291] "Event occurred" object="kube-system/calico-node" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-node-hrdqp"
... skipping 36 lines ...
I0911 18:03:32.142485       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bd0878cd86, ext:94879256323, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.142614       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bd088013a5, ext:94879733126, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.142737       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:32.142865       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:32.142969       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:32.143471       1 daemon_controller.go:247] Updating daemon set kube-proxy
E0911 18:03:32.152499       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.152803       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.153237       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.155729       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.155794       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.155970       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:32.171947       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:03:32.174976       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (290.134935ms)
I0911 18:03:32.177843       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bcf5210059, ext:94628476986, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.186627       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd0b1f9ec4, ext:94923743397, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.186802       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:32.186988       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:32.187066       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd0b1f9ec4, ext:94923743397, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.187243       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd0b290cd7, ext:94924361400, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.187345       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:32.187461       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:32.187613       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:32.187998       1 daemon_controller.go:247] Updating daemon set calico-node
E0911 18:03:32.215072       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.215277       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.215404       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.215732       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.215820       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.216050       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.216506       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.217120       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.217260       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.217574       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.217663       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.217949       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.219709       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.221568       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.222194       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.223627       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.226597       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.228086       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.228489       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.229520       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.229710       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.230005       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.230085       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.230204       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.230534       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.230622       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.230728       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.231083       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.231159       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.231246       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.231523       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.233261       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.234170       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.236803       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.239285       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.239429       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.239803       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.239999       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.240126       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.240945       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.242495       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.242613       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:32.257915       1 daemon_controller.go:247] Updating daemon set calico-node
I0911 18:03:32.258173       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (83.061411ms)
I0911 18:03:32.267558       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (132.713516ms)
I0911 18:03:32.268000       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bd088013a5, ext:94879733126, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.268233       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bd0ffcd5c6, ext:95005349699, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.268329       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
... skipping 36 lines ...
I0911 18:03:32.591711       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:32.591803       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:32.592015       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (2.073813ms)
I0911 18:03:32.592164       1 disruption.go:427] updatePod called on pod "kube-proxy-bqghl"
I0911 18:03:32.592306       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-bqghl, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:32.592384       1 disruption.go:430] No matching pdb for pod "kube-proxy-bqghl"
E0911 18:03:32.606670       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.606683       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.606704       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.671497       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.671645       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.671820       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:32.705526       1 daemon_controller.go:570] Pod calico-node-hrdqp updated.
I0911 18:03:32.720611       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd106c3ea7, ext:95012651144, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.724319       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd2b2ba4df, ext:95461402304, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:32.725549       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:32.726815       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:32.727611       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd2b2ba4df, ext:95461402304, loc:(*time.Location)(0x7504dc0)}}
... skipping 2 lines ...
I0911 18:03:32.730657       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:32.730752       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:32.730921       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (23.372944ms)
I0911 18:03:32.731103       1 disruption.go:427] updatePod called on pod "calico-node-hrdqp"
I0911 18:03:32.731194       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-hrdqp, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:32.731279       1 disruption.go:430] No matching pdb for pod "calico-node-hrdqp"
E0911 18:03:32.735340       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.735424       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.735545       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.854900       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.855865       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.856811       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.859347       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.859447       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.859619       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.859970       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.860058       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.860170       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.860488       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.860573       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.860678       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.860954       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.861083       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.861225       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.861565       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.861666       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.861762       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.862061       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.862145       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.862265       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.862568       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.862656       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.862753       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.863013       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.863100       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.863196       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.863536       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.864168       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.864866       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.866528       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.867220       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.868097       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.868978       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.869102       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.869192       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:32.869653       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:32.869744       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:32.869847       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:32.898998       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:03:32.899167       1 azure_vmss_cache.go:340] Node capz-4tyuov-md-0-sgwmt has joined the cluster since the last VM cache refresh, refreshing the cache
I0911 18:03:33.126372       1 node_lifecycle_controller.go:770] Controller observed a new Node: "capz-4tyuov-md-0-sgwmt"
I0911 18:03:33.126419       1 controller_utils.go:172] Recording Registered Node capz-4tyuov-md-0-sgwmt in Controller event message for node capz-4tyuov-md-0-sgwmt
I0911 18:03:33.126437       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: eastus2::0
I0911 18:03:33.126452       1 node_lifecycle_controller.go:770] Controller observed a new Node: "capz-4tyuov-md-0-pxbpw"
... skipping 24 lines ...
I0911 18:03:38.065973       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:38.065999       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759be83eb98b1, ext:100802893458, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:38.066107       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759be83f0ad17, ext:100803226360, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:38.066174       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:38.066254       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:38.066295       1 daemon_controller.go:1102] Updating daemon set status
E0911 18:03:38.104337       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:38.104653       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:38.119522       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:38.119849       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:38.119888       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:38.119921       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:38.120294       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:38.120379       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:38.120606       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:38.122211       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:38.122830       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:38.123169       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:38.127310       1 node_lifecycle_controller.go:869] Node capz-4tyuov-md-0-pxbpw is NotReady as of 2021-09-11 18:03:38.127242684 +0000 UTC m=+100.864364345. Adding it to the Taint queue.
I0911 18:03:38.157649       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0911 18:03:38.162602       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (121.462665ms)
I0911 18:03:38.167402       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759be83f0ad17, ext:100803226360, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:38.167991       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759be8a0341e9, ext:100905107402, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:38.168037       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
... skipping 17 lines ...
I0911 18:03:40.220525       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:40.220717       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:40.220823       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:40.243115       1 disruption.go:427] updatePod called on pod "kube-proxy-qvrvh"
I0911 18:03:40.243178       1 disruption.go:490] No PodDisruptionBudgets found for pod kube-proxy-qvrvh, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:40.243184       1 disruption.go:430] No matching pdb for pod "kube-proxy-qvrvh"
E0911 18:03:40.250525       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:40.250536       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:40.250557       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:40.488255       1 daemon_controller.go:247] Updating daemon set kube-proxy
I0911 18:03:40.499967       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (281.348942ms)
I0911 18:03:40.500534       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bf0d239bd0, ext:102957559217, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:40.500704       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bf1dd813bc, ext:103237821753, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:40.501261       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:40.501576       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:40.518208       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bf1dd813bc, ext:103237821753, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:40.523105       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc04759bf1f2d6b07, ext:103260191976, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:40.531338       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [], creating 0
I0911 18:03:40.531928       1 daemon_controller.go:1029] Pods to delete for daemon set kube-proxy: [], deleting 0
I0911 18:03:40.532902       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:40.536672       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/kube-proxy" (36.675301ms)
E0911 18:03:40.659430       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:40.659619       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:40.659759       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:40.785253       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-ylbvyy" (12.1µs)
E0911 18:03:40.785678       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:40.785734       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:40.785816       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:40.786334       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:40.820104       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:40.820946       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:41.406626       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:03:41.413562       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:03:41.706260       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:03:41.783000       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-mocq8m" (35.701µs)
I0911 18:03:42.072162       1 daemon_controller.go:570] Pod calico-node-hrdqp updated.
I0911 18:03:42.095613       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bd2b899381, ext:95467558142, loc:(*time.Location)(0x7504dc0)}}
... skipping 3 lines ...
I0911 18:03:42.098459       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bf85b65a58, ext:104832958521, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:42.098646       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759bf85e128ff, ext:104835763936, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:42.098739       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:42.098838       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0911 18:03:42.099059       1 daemon_controller.go:1102] Updating daemon set status
I0911 18:03:42.099234       1 daemon_controller.go:1162] Finished syncing daemon set "kube-system/calico-node" (20.455813ms)
E0911 18:03:42.099443       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.102510       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.102967       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:42.099627       1 disruption.go:427] updatePod called on pod "calico-node-hrdqp"
I0911 18:03:42.106295       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-hrdqp, PodDisruptionBudget controller will avoid syncing.
I0911 18:03:42.106651       1 disruption.go:430] No matching pdb for pod "calico-node-hrdqp"
E0911 18:03:42.107316       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.107799       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.108035       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.108849       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.110443       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.110582       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.111021       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.111273       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.111367       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.111643       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.111716       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.111793       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.112056       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.112127       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.112202       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.112494       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.112951       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.113108       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
I0911 18:03:42.126972       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
E0911 18:03:42.153349       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.153365       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.153388       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.153855       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.153999       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.154138       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.154550       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.154630       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.154712       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.154978       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.155050       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.155130       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
E0911 18:03:42.158909       1 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
W0911 18:03:42.159045       1 driver-call.go:149] FlexVolume: driver call failed: executable: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: fork/exec /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds: no such file or directory, output: ""
E0911 18:03:42.159123       1 plugins.go:750] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
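
The run of errors above is the controller-manager's periodic FlexVolume plugin probe: it walks /usr/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory but no executable uds binary, so the "init" call fails with fork/exec and produces empty output, and decoding that empty output as JSON yields "unexpected end of JSON input". It comes from an unrelated FlexVolume directory, not from the Azure disk driver under test. The Go sketch below only illustrates the failure mode, with made-up names rather than the real probe code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for the JSON a FlexVolume driver is
// expected to print for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callInit runs "<driver> init" and decodes its stdout, reproducing the two
// errors seen in the log: a fork/exec error when the binary is missing, and
// "unexpected end of JSON input" when the output is empty.
func callInit(driver string) (*driverStatus, error) {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("driver call failed: %v, output: %q", err, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output %q: %v", out, err)
	}
	return &st, nil
}

func main() {
	_, err := callInit("/usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err) // missing binary -> fork/exec error; empty output -> JSON error
}
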
I0911 18:03:43.129297       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-sgwmt ReadyCondition updated. Updating timestamp.
I0911 18:03:43.129513       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-pxbpw ReadyCondition updated. Updating timestamp.
I0911 18:03:43.129570       1 node_lifecycle_controller.go:869] Node capz-4tyuov-md-0-pxbpw is NotReady as of 2021-09-11 18:03:43.12952737 +0000 UTC m=+105.866649131. Adding it to the Taint queue.
I0911 18:03:43.329318       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-4tyuov-md-0-pxbpw}
I0911 18:03:43.329616       1 taint_manager.go:440] "Updating known taints on node" node="capz-4tyuov-md-0-pxbpw" taints=[{Key:node.kubernetes.io/not-ready Value: Effect:NoExecute TimeAdded:2021-09-11 18:03:43 +0000 UTC}]
I0911 18:03:43.329743       1 taint_manager.go:361] "Current tolerations for pod tolerate forever, cancelling any scheduled deletion" pod="kube-system/calico-node-hrdqp"
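
Here the taint manager has noticed the node.kubernetes.io/not-ready NoExecute taint on the still-booting node and decided not to evict calico-node-hrdqp, because the pod's matching toleration carries no TolerationSeconds and therefore tolerates the taint indefinitely. A minimal sketch of that decision, using simplified local types instead of the real k8s.io/api structs (toleration matching is more involved in practice):

package main

import "fmt"

// Simplified stand-ins for the Kubernetes taint/toleration fields that matter here.
type taint struct {
	Key    string
	Effect string // e.g. "NoExecute"
}

type toleration struct {
	Key               string
	Effect            string
	TolerationSeconds *int64 // nil means "tolerate forever"
}

// evictionDelay reports whether the pod must be evicted and, if it is only
// tolerated for a while, after how many seconds.
func evictionDelay(t taint, tols []toleration) (evict bool, afterSeconds int64) {
	for _, tol := range tols {
		if tol.Key == t.Key && tol.Effect == t.Effect {
			if tol.TolerationSeconds == nil {
				return false, 0 // tolerate forever: cancel any scheduled deletion
			}
			return true, *tol.TolerationSeconds
		}
	}
	return true, 0 // no matching toleration: evict immediately
}

func main() {
	notReady := taint{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}
	// DaemonSet pods such as calico-node carry an open-ended toleration.
	tols := []toleration{{Key: "node.kubernetes.io/not-ready", Effect: "NoExecute"}}
	evict, after := evictionDelay(notReady, tols)
	fmt.Println(evict, after) // false 0
}
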
... skipping 113 lines ...
I0911 18:03:57.411355       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:03:57.411484       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0911 18:03:57.411617       1 controller.go:720] It took 0.000537604 seconds to finish nodeSyncInternal
I0911 18:03:57.410703       1 controller_utils.go:209] Added [] Taint to Node capz-4tyuov-md-0-pxbpw
I0911 18:03:57.509852       1 controller_utils.go:221] Made sure that Node capz-4tyuov-md-0-pxbpw has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0911 18:03:57.510372       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:03:58.193894       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-4tyuov-md-0-pxbpw transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-09-11 18:03:42 +0000 UTC,LastTransitionTime:2021-09-11 18:03:30 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-11 18:03:57 +0000 UTC,LastTransitionTime:2021-09-11 18:03:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0911 18:03:58.206955       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-pxbpw ReadyCondition updated. Updating timestamp.
I0911 18:03:58.274828       1 daemon_controller.go:570] Pod calico-node-qfp2g updated.
I0911 18:03:58.276585       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759c336998833, ext:119653153300, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:58.276799       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc04759c3907f8dfc, ext:121013916537, loc:(*time.Location)(0x7504dc0)}}
I0911 18:03:58.276896       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0911 18:03:58.277018       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
... skipping 45 lines ...
I0911 18:04:01.119021       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:04:01.119174       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:04:01.119354       1 controller.go:737] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0911 18:04:01.119461       1 controller.go:720] It took 0.000602085 seconds to finish nodeSyncInternal
I0911 18:04:01.150309       1 controller_utils.go:221] Made sure that Node capz-4tyuov-md-0-sgwmt has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}] Taint
I0911 18:04:01.154011       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
I0911 18:04:03.318520       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-4tyuov-md-0-sgwmt transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-09-11 18:03:39 +0000 UTC,LastTransitionTime:2021-09-11 18:03:28 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-11 18:04:01 +0000 UTC,LastTransitionTime:2021-09-11 18:04:01 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0911 18:04:03.318601       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-sgwmt ReadyCondition updated. Updating timestamp.
I0911 18:04:03.417412       1 node_lifecycle_controller.go:893] Node capz-4tyuov-md-0-sgwmt is healthy again, removing all taints
I0911 18:04:03.423208       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-4tyuov-md-0-sgwmt}
I0911 18:04:03.426228       1 taint_manager.go:440] "Updating known taints on node" node="capz-4tyuov-md-0-sgwmt" taints=[]
I0911 18:04:03.423503       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
I0911 18:04:03.432019       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-4tyuov-md-0-sgwmt"
... skipping 2386 lines ...
I0911 18:20:11.908019       1 pv_controller.go:1763] operation "delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]" is already running, skipping
I0911 18:20:11.908155       1 pv_controller.go:1231] deleteVolumeOperation [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7] started
I0911 18:20:11.910612       1 pv_controller.go:1340] isVolumeReleased[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is released
I0911 18:20:11.910627       1 pv_controller.go:1404] doDeleteVolume [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]
I0911 18:20:11.928485       1 gc_controller.go:161] GC'ing orphaned
I0911 18:20:11.928519       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:20:11.933026       1 pv_controller.go:1259] deletion of volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:20:11.933051       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: set phase Failed
I0911 18:20:11.933091       1 pv_controller.go:858] updating PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: set phase Failed
I0911 18:20:11.937920       1 pv_protection_controller.go:205] Got event on PV pvc-597412fa-4aa3-4792-8db7-e403acebc4c7
I0911 18:20:11.937946       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" with version 2329
I0911 18:20:11.937952       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" with version 2329
I0911 18:20:11.937968       1 pv_controller.go:879] volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" entered phase "Failed"
I0911 18:20:11.937978       1 pv_controller.go:901] volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:20:11.938031       1 goroutinemap.go:150] Operation for "delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]" failed. No retries permitted until 2021-09-11 18:20:12.43800988 +0000 UTC m=+1095.175131641 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:20:11.938272       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: phase: Failed, bound to: "azuredisk-8081/pvc-lpcxl (uid: 597412fa-4aa3-4792-8db7-e403acebc4c7)", boundByController: true
I0911 18:20:11.938290       1 event.go:291] "Event occurred" object="pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:20:11.938320       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is bound to claim azuredisk-8081/pvc-lpcxl
I0911 18:20:11.938340       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: claim azuredisk-8081/pvc-lpcxl not found
I0911 18:20:11.938349       1 pv_controller.go:1108] reclaimVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: policy is Delete
I0911 18:20:11.938361       1 pv_controller.go:1752] scheduleOperation[delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]]
I0911 18:20:11.938369       1 pv_controller.go:1765] operation "delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]" postponed due to exponential backoff
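
The failed delete of pvc-597412fa-4aa3-4792-8db7-e403acebc4c7 is not retried immediately: goroutinemap blocks the operation for 500ms after the first failure, 1s after the next, and so on, which is why the PV lingers in phase Failed between attempts while the disk is still attached. A rough sketch of that bookkeeping, with hypothetical names rather than the actual goroutinemap code:

package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond
	maxBackoff     = 2 * time.Minute // cap chosen for the sketch
)

// backoff tracks when a named operation may be retried.
type backoff struct {
	last       time.Duration
	retryAfter time.Time
}

// fail records a failure and doubles the wait before the next attempt.
func (b *backoff) fail(now time.Time) {
	if b.last == 0 {
		b.last = initialBackoff
	} else if b.last < maxBackoff {
		b.last *= 2
	}
	b.retryAfter = now.Add(b.last)
}

// allowed reports whether a retry may run at time now.
func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.retryAfter) }

func main() {
	var b backoff
	now := time.Now()
	b.fail(now)                  // first failure: wait 500ms, as in the log above
	fmt.Println(b.last, b.allowed(now))
	b.fail(now.Add(time.Second)) // second failure: wait 1s
	fmt.Println(b.last)
}

The doubling keeps the controller from hammering the cloud API; the delete finally succeeds once the disk has been detached, as the later lines show.
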
... skipping 13 lines ...
I0911 18:20:18.796311       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-pxbpw ReadyCondition updated. Updating timestamp.
I0911 18:20:21.296370       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:20:24.811213       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="66µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:47754" resp=200
I0911 18:20:26.448286       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:20:26.782604       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:20:26.782994       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" with version 2329
I0911 18:20:26.783231       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: phase: Failed, bound to: "azuredisk-8081/pvc-lpcxl (uid: 597412fa-4aa3-4792-8db7-e403acebc4c7)", boundByController: true
I0911 18:20:26.783422       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is bound to claim azuredisk-8081/pvc-lpcxl
I0911 18:20:26.783539       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: claim azuredisk-8081/pvc-lpcxl not found
I0911 18:20:26.783566       1 pv_controller.go:1108] reclaimVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: policy is Delete
I0911 18:20:26.783585       1 pv_controller.go:1752] scheduleOperation[delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]]
I0911 18:20:26.783797       1 pv_controller.go:1231] deleteVolumeOperation [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7] started
I0911 18:20:26.798125       1 pv_controller.go:1340] isVolumeReleased[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is released
I0911 18:20:26.798154       1 pv_controller.go:1404] doDeleteVolume [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]
I0911 18:20:26.798196       1 pv_controller.go:1259] deletion of volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) since it's in attaching or detaching state
I0911 18:20:26.798214       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: set phase Failed
I0911 18:20:26.798277       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: phase Failed already set
E0911 18:20:26.798342       1 goroutinemap.go:150] Operation for "delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]" failed. No retries permitted until 2021-09-11 18:20:27.79831409 +0000 UTC m=+1110.535435851 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7) since it's in attaching or detaching state
I0911 18:20:31.421107       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 0 items received
I0911 18:20:31.479090       1 controller.go:269] Triggering nodeSync
I0911 18:20:31.479156       1 controller.go:288] nodeSync has been triggered
I0911 18:20:31.479168       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:20:31.479184       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:20:31.479192       1 controller.go:720] It took 2.69e-05 seconds to finish nodeSyncInternal
... skipping 9 lines ...
I0911 18:20:38.949369       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7") on node "capz-4tyuov-md-0-pxbpw" 
I0911 18:20:41.354161       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ControllerRevision total 0 items received
I0911 18:20:41.449061       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:20:41.527356       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:20:41.783449       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:20:41.783532       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" with version 2329
I0911 18:20:41.783615       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: phase: Failed, bound to: "azuredisk-8081/pvc-lpcxl (uid: 597412fa-4aa3-4792-8db7-e403acebc4c7)", boundByController: true
I0911 18:20:41.783737       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is bound to claim azuredisk-8081/pvc-lpcxl
I0911 18:20:41.783763       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: claim azuredisk-8081/pvc-lpcxl not found
I0911 18:20:41.783836       1 pv_controller.go:1108] reclaimVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: policy is Delete
I0911 18:20:41.783983       1 pv_controller.go:1752] scheduleOperation[delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]]
I0911 18:20:41.784156       1 pv_controller.go:1231] deleteVolumeOperation [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7] started
I0911 18:20:41.791419       1 pv_controller.go:1340] isVolumeReleased[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is released
... skipping 3 lines ...
I0911 18:20:46.942927       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7
I0911 18:20:46.942965       1 pv_controller.go:1435] volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" deleted
I0911 18:20:46.942979       1 pv_controller.go:1283] deleteVolumeOperation [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: success
I0911 18:20:46.953235       1 pv_protection_controller.go:205] Got event on PV pvc-597412fa-4aa3-4792-8db7-e403acebc4c7
I0911 18:20:46.954107       1 pv_protection_controller.go:125] Processing PV pvc-597412fa-4aa3-4792-8db7-e403acebc4c7
I0911 18:20:46.954066       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" with version 2383
I0911 18:20:46.954476       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: phase: Failed, bound to: "azuredisk-8081/pvc-lpcxl (uid: 597412fa-4aa3-4792-8db7-e403acebc4c7)", boundByController: true
I0911 18:20:46.954511       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: volume is bound to claim azuredisk-8081/pvc-lpcxl
I0911 18:20:46.954592       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: claim azuredisk-8081/pvc-lpcxl not found
I0911 18:20:46.954616       1 pv_controller.go:1108] reclaimVolume[pvc-597412fa-4aa3-4792-8db7-e403acebc4c7]: policy is Delete
I0911 18:20:46.954651       1 pv_controller.go:1752] scheduleOperation[delete-pvc-597412fa-4aa3-4792-8db7-e403acebc4c7[62d00bcf-a026-4d0e-9284-56276350292d]]
I0911 18:20:46.955132       1 pv_controller.go:1231] deleteVolumeOperation [pvc-597412fa-4aa3-4792-8db7-e403acebc4c7] started
I0911 18:20:46.957703       1 pv_controller.go:1243] Volume "pvc-597412fa-4aa3-4792-8db7-e403acebc4c7" is already being deleted
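
Taken together, the lines above show why the deletion eventually goes through: while the disk is attached the delete is rejected outright, while a detach is in flight it is rejected as "attaching or detaching", and only after DetachVolume.Detach succeeds (18:20:38) does the next deleteVolumeOperation retry delete the managed disk (18:20:46). A small sketch of that precondition check, with hypothetical helpers standing in for the cloud-provider disk client:

package main

import (
	"errors"
	"fmt"
)

// diskState is a simplified view of what the provider checks before deleting
// a managed disk.
type diskState struct {
	attachedTo string // VM URI, empty if unattached
	inFlight   bool   // an attach or detach operation is still running
}

// errRetryLater marks failures that the PV controller retries with backoff.
var errRetryLater = errors.New("retry later")

// deleteManagedDisk mimics the two rejections seen in the log before the
// final successful delete. lookup and doDelete are hypothetical stand-ins.
func deleteManagedDisk(diskURI string, lookup func(string) diskState, doDelete func(string) error) error {
	st := lookup(diskURI)
	if st.inFlight {
		return fmt.Errorf("%w: failed to delete disk(%s) since it's in attaching or detaching state", errRetryLater, diskURI)
	}
	if st.attachedTo != "" {
		return fmt.Errorf("%w: disk(%s) already attached to node(%s), could not be deleted", errRetryLater, diskURI, st.attachedTo)
	}
	return doDelete(diskURI)
}

func main() {
	state := diskState{attachedTo: "example-node"} // disk still attached
	lookup := func(string) diskState { return state }
	doDelete := func(string) error { return nil }

	fmt.Println(deleteManagedDisk("example-disk-uri", lookup, doDelete)) // rejected: still attached
	state = diskState{}                                                  // detach completed
	fmt.Println(deleteManagedDisk("example-disk-uri", lookup, doDelete)) // nil: delete succeeds
}
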
... skipping 55 lines ...
I0911 18:20:52.408326       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-rqjmf.16a3d77a05c42ff4, uid c2fd970b-1c98-4dd9-9888-a3331b229cb3, event type delete
I0911 18:20:52.411246       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-rqjmf.16a3d77a0b4cf655, uid af07a083-b893-4aff-8856-10c56af76f8a, event type delete
I0911 18:20:52.423700       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name azuredisk-volume-tester-rqjmf.16a3d77a36a1a3b7, uid b1b81e11-10d6-4bf8-ac44-484d6b4ae59c, event type delete
I0911 18:20:52.429750       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name pvc-lpcxl.16a3d6afd55404f7, uid ddf1e941-33a1-48ff-80a1-bd9e511405ee, event type delete
I0911 18:20:52.437699       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8081, name pvc-lpcxl.16a3d6b069ac5be7, uid 4b17943d-8870-4168-8886-be94649806fa, event type delete
I0911 18:20:52.470284       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8081, name default-token-rcc4z, uid 82575bf0-af3a-45d3-9d24-df4831a71eeb, event type delete
E0911 18:20:52.485962       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8081/default: secrets "default-token-r2f8z" is forbidden: unable to create new content in namespace azuredisk-8081 because it is being terminated
I0911 18:20:52.525167       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8081, name kube-root-ca.crt, uid 078201eb-897e-4d8f-8193-f8ae0f7364c7, event type delete
I0911 18:20:52.529187       1 publisher.go:186] Finished syncing namespace "azuredisk-8081" (3.707416ms)
I0911 18:20:52.564303       1 azure_managedDiskController.go:208] azureDisk - created new MD Name:capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6 StorageAccountType:StandardSSD_LRS Size:10
I0911 18:20:52.581372       1 tokens_controller.go:252] syncServiceAccount(azuredisk-8081/default), service account deleted, removing tokens
I0911 18:20:52.581655       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-8081, name default, uid df6b15b1-479c-4c98-984f-1413359ddbc4, event type delete
I0911 18:20:52.581822       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-8081" (2.2µs)
... skipping 66 lines ...
I0911 18:20:53.310852       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-kcfv6"
I0911 18:20:53.326777       1 attacher.go:84] GetDiskLun returned: cannot find Lun for disk capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6. Initiating attaching volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6" to node "capz-4tyuov-md-0-pxbpw".
I0911 18:20:53.368705       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6" lun 0 to node "capz-4tyuov-md-0-pxbpw".
I0911 18:20:53.368752       1 azure_controller_standard.go:93] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - attach disk(capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) with DiskEncryptionSetID()
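
Attach runs the opposite flow: GetDiskLun first asks whether the disk already has a LUN on the target VM, and since it does not, the controller initiates an attach and picks a free LUN (0 here). A toy sketch of choosing the first free LUN, which is all the "lun 0" above amounts to (not the actual azure_controller code):

package main

import "fmt"

// firstFreeLUN returns the lowest LUN in [0, maxLUNs) not already used by a
// data disk on the VM, or -1 if every slot is taken.
func firstFreeLUN(used []int32, maxLUNs int32) int32 {
	taken := make(map[int32]bool, len(used))
	for _, l := range used {
		taken[l] = true
	}
	for lun := int32(0); lun < maxLUNs; lun++ {
		if !taken[lun] {
			return lun
		}
	}
	return -1
}

func main() {
	// Assuming no data disks are attached yet, the new disk gets LUN 0.
	fmt.Println(firstFreeLUN(nil, 64))
}
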
I0911 18:20:53.575350       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-1318
I0911 18:20:53.603291       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-1318, name default-token-4qfk9, uid 68fafccf-bc6d-49f7-af94-634a79284879, event type delete
E0911 18:20:53.617811       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-1318/default: secrets "default-token-j2kbn" is forbidden: unable to create new content in namespace azuredisk-1318 because it is being terminated
I0911 18:20:53.656713       1 tokens_controller.go:252] syncServiceAccount(azuredisk-1318/default), service account deleted, removing tokens
I0911 18:20:53.657104       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-1318, name default, uid e569e3e2-f4ac-4468-8ad2-e65fe8481443, event type delete
I0911 18:20:53.657226       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1318" (2.2µs)
I0911 18:20:53.705686       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-1318, name kube-root-ca.crt, uid 83fbf3b5-297f-40d3-b34c-b5f5853a8fc5, event type delete
I0911 18:20:53.709368       1 publisher.go:186] Finished syncing namespace "azuredisk-1318" (3.873517ms)
I0911 18:20:53.749815       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-1318" (1.9µs)
... skipping 375 lines ...
I0911 18:22:58.185011       1 pv_controller.go:1108] reclaimVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: policy is Delete
I0911 18:22:58.185020       1 pv_controller.go:1752] scheduleOperation[delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]]
I0911 18:22:58.185032       1 pv_controller.go:1763] operation "delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]" is already running, skipping
I0911 18:22:58.185048       1 pv_controller.go:1231] deleteVolumeOperation [pvc-916be236-f188-40bc-8105-a85ea39238d6] started
I0911 18:22:58.186892       1 pv_controller.go:1340] isVolumeReleased[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is released
I0911 18:22:58.186911       1 pv_controller.go:1404] doDeleteVolume [pvc-916be236-f188-40bc-8105-a85ea39238d6]
I0911 18:22:58.211925       1 pv_controller.go:1259] deletion of volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:22:58.211967       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-916be236-f188-40bc-8105-a85ea39238d6]: set phase Failed
I0911 18:22:58.211979       1 pv_controller.go:858] updating PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: set phase Failed
I0911 18:22:58.216468       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" with version 2648
I0911 18:22:58.216499       1 pv_controller.go:879] volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" entered phase "Failed"
I0911 18:22:58.216509       1 pv_controller.go:901] volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:22:58.216551       1 goroutinemap.go:150] Operation for "delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]" failed. No retries permitted until 2021-09-11 18:22:58.716530928 +0000 UTC m=+1261.453652689 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:22:58.216764       1 pv_protection_controller.go:205] Got event on PV pvc-916be236-f188-40bc-8105-a85ea39238d6
I0911 18:22:58.216791       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" with version 2648
I0911 18:22:58.216817       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: phase: Failed, bound to: "azuredisk-3274/pvc-267l6 (uid: 916be236-f188-40bc-8105-a85ea39238d6)", boundByController: true
I0911 18:22:58.216822       1 event.go:291] "Event occurred" object="pvc-916be236-f188-40bc-8105-a85ea39238d6" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:22:58.216866       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is bound to claim azuredisk-3274/pvc-267l6
I0911 18:22:58.216888       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: claim azuredisk-3274/pvc-267l6 not found
I0911 18:22:58.216897       1 pv_controller.go:1108] reclaimVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: policy is Delete
I0911 18:22:58.217034       1 pv_controller.go:1752] scheduleOperation[delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]]
I0911 18:22:58.217187       1 pv_controller.go:1765] operation "delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]" postponed due to exponential backoff
... skipping 10 lines ...
I0911 18:23:03.820340       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-pxbpw ReadyCondition updated. Updating timestamp.
I0911 18:23:04.811241       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="69.201µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49284" resp=200
I0911 18:23:11.457814       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:23:11.530685       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:23:11.790435       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:23:11.790511       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" with version 2648
I0911 18:23:11.790695       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: phase: Failed, bound to: "azuredisk-3274/pvc-267l6 (uid: 916be236-f188-40bc-8105-a85ea39238d6)", boundByController: true
I0911 18:23:11.790840       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is bound to claim azuredisk-3274/pvc-267l6
I0911 18:23:11.790868       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: claim azuredisk-3274/pvc-267l6 not found
I0911 18:23:11.790959       1 pv_controller.go:1108] reclaimVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: policy is Delete
I0911 18:23:11.790978       1 pv_controller.go:1752] scheduleOperation[delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]]
I0911 18:23:11.791032       1 pv_controller.go:1231] deleteVolumeOperation [pvc-916be236-f188-40bc-8105-a85ea39238d6] started
I0911 18:23:11.798475       1 pv_controller.go:1340] isVolumeReleased[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is released
I0911 18:23:11.798512       1 pv_controller.go:1404] doDeleteVolume [pvc-916be236-f188-40bc-8105-a85ea39238d6]
I0911 18:23:11.798767       1 pv_controller.go:1259] deletion of volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) since it's in attaching or detaching state
I0911 18:23:11.798795       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-916be236-f188-40bc-8105-a85ea39238d6]: set phase Failed
I0911 18:23:11.798824       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-916be236-f188-40bc-8105-a85ea39238d6]: phase Failed already set
E0911 18:23:11.798930       1 goroutinemap.go:150] Operation for "delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]" failed. No retries permitted until 2021-09-11 18:23:12.798906934 +0000 UTC m=+1275.536028695 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) since it's in attaching or detaching state
I0911 18:23:11.935375       1 gc_controller.go:161] GC'ing orphaned
I0911 18:23:11.935410       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:23:14.811446       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="62.6µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49390" resp=200
I0911 18:23:15.035743       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) returned with <nil>
I0911 18:23:15.035787       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6) succeeded
I0911 18:23:15.035798       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6 was detached from node:capz-4tyuov-md-0-pxbpw
I0911 18:23:15.035985       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6") on node "capz-4tyuov-md-0-pxbpw" 
I0911 18:23:21.408421       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:23:24.815506       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="80.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49488" resp=200
I0911 18:23:26.458067       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:23:26.791055       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:23:26.791128       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" with version 2648
I0911 18:23:26.791206       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: phase: Failed, bound to: "azuredisk-3274/pvc-267l6 (uid: 916be236-f188-40bc-8105-a85ea39238d6)", boundByController: true
I0911 18:23:26.791267       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is bound to claim azuredisk-3274/pvc-267l6
I0911 18:23:26.791295       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: claim azuredisk-3274/pvc-267l6 not found
I0911 18:23:26.791323       1 pv_controller.go:1108] reclaimVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: policy is Delete
I0911 18:23:26.791345       1 pv_controller.go:1752] scheduleOperation[delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]]
I0911 18:23:26.791383       1 pv_controller.go:1231] deleteVolumeOperation [pvc-916be236-f188-40bc-8105-a85ea39238d6] started
I0911 18:23:26.801072       1 pv_controller.go:1340] isVolumeReleased[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is released
... skipping 4 lines ...
I0911 18:23:32.100275       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-916be236-f188-40bc-8105-a85ea39238d6
I0911 18:23:32.100312       1 pv_controller.go:1435] volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" deleted
I0911 18:23:32.100326       1 pv_controller.go:1283] deleteVolumeOperation [pvc-916be236-f188-40bc-8105-a85ea39238d6]: success
I0911 18:23:32.111625       1 pv_protection_controller.go:205] Got event on PV pvc-916be236-f188-40bc-8105-a85ea39238d6
I0911 18:23:32.111657       1 pv_protection_controller.go:125] Processing PV pvc-916be236-f188-40bc-8105-a85ea39238d6
I0911 18:23:32.112038       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" with version 2699
I0911 18:23:32.112078       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: phase: Failed, bound to: "azuredisk-3274/pvc-267l6 (uid: 916be236-f188-40bc-8105-a85ea39238d6)", boundByController: true
I0911 18:23:32.112111       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: volume is bound to claim azuredisk-3274/pvc-267l6
I0911 18:23:32.112134       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: claim azuredisk-3274/pvc-267l6 not found
I0911 18:23:32.112143       1 pv_controller.go:1108] reclaimVolume[pvc-916be236-f188-40bc-8105-a85ea39238d6]: policy is Delete
I0911 18:23:32.112158       1 pv_controller.go:1752] scheduleOperation[delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]]
I0911 18:23:32.112167       1 pv_controller.go:1763] operation "delete-pvc-916be236-f188-40bc-8105-a85ea39238d6[3c5226ac-8edd-4aa6-805c-5f1f805191c8]" is already running, skipping
I0911 18:23:32.116802       1 pv_controller_base.go:235] volume "pvc-916be236-f188-40bc-8105-a85ea39238d6" deleted
... skipping 134 lines ...
I0911 18:23:38.778380       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3274, name azuredisk-volume-tester-kcfv6.16a3d7927349e024, uid 9af8451a-590a-43c2-b9e9-6881ace10324, event type delete
I0911 18:23:38.782447       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3274, name azuredisk-volume-tester-kcfv6.16a3d79a5426e344, uid 725efed6-ab56-428e-bfef-d1238ecc6187, event type delete
I0911 18:23:38.794066       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3274, name azuredisk-volume-tester-kcfv6.16a3d7a0f1fcd7a9, uid 2e238375-73f5-41b9-bff1-a71f707ccacc, event type delete
I0911 18:23:38.796854       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3274, name pvc-267l6.16a3d78397a3c709, uid d897c72c-9afd-4a77-a8b1-ecd99778a5a6, event type delete
I0911 18:23:38.803190       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3274, name pvc-267l6.16a3d78424f932bf, uid a083c267-30ee-4f35-bd07-3fa25034c0be, event type delete
I0911 18:23:38.819252       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3274, name default-token-6jbrw, uid 15cdf75b-f546-4f68-9301-bd14e1532875, event type delete
E0911 18:23:38.834675       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3274/default: secrets "default-token-84hl5" is forbidden: unable to create new content in namespace azuredisk-3274 because it is being terminated
I0911 18:23:38.834785       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3274/default), service account deleted, removing tokens
I0911 18:23:38.834863       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3274, name default, uid 00a9018d-ac9d-45c6-bc18-988371c3a27c, event type delete
I0911 18:23:38.834886       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3274" (2.1µs)
I0911 18:23:38.891591       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3274" (2µs)
I0911 18:23:38.891795       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3274, estimate: 0, errors: <nil>
I0911 18:23:38.908143       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3274" (252.564934ms)
... skipping 148 lines ...
I0911 18:23:59.229739       1 pv_controller.go:1108] reclaimVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: policy is Delete
I0911 18:23:59.229745       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]]
I0911 18:23:59.229749       1 pv_controller.go:1763] operation "delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]" is already running, skipping
I0911 18:23:59.229771       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa] started
I0911 18:23:59.231914       1 pv_controller.go:1340] isVolumeReleased[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is released
I0911 18:23:59.231928       1 pv_controller.go:1404] doDeleteVolume [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]
I0911 18:23:59.271724       1 pv_controller.go:1259] deletion of volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:23:59.271794       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: set phase Failed
I0911 18:23:59.271820       1 pv_controller.go:858] updating PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: set phase Failed
I0911 18:23:59.276388       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" with version 2803
I0911 18:23:59.276419       1 pv_controller.go:879] volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" entered phase "Failed"
I0911 18:23:59.276430       1 pv_controller.go:901] volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
E0911 18:23:59.276487       1 goroutinemap.go:150] Operation for "delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]" failed. No retries permitted until 2021-09-11 18:23:59.77645111 +0000 UTC m=+1322.513572771 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:23:59.276849       1 event.go:291] "Event occurred" object="pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted"
I0911 18:23:59.276960       1 pv_protection_controller.go:205] Got event on PV pvc-c23469cf-15b1-48ad-afa2-bc64548890fa
I0911 18:23:59.276979       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" with version 2803
I0911 18:23:59.276994       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: phase: Failed, bound to: "azuredisk-495/pvc-kwh65 (uid: c23469cf-15b1-48ad-afa2-bc64548890fa)", boundByController: true
I0911 18:23:59.277009       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is bound to claim azuredisk-495/pvc-kwh65
I0911 18:23:59.277025       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: claim azuredisk-495/pvc-kwh65 not found
I0911 18:23:59.277031       1 pv_controller.go:1108] reclaimVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: policy is Delete
I0911 18:23:59.277058       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]]
I0911 18:23:59.277087       1 pv_controller.go:1765] operation "delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]" postponed due to exponential backoff
I0911 18:23:59.958254       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
... skipping 11 lines ...
I0911 18:24:06.404987       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ResourceQuota total 9 items received
I0911 18:24:08.521176       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CronJob total 6 items received
I0911 18:24:11.460653       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:24:11.532834       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:24:11.792432       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:24:11.792513       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" with version 2803
I0911 18:24:11.792616       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: phase: Failed, bound to: "azuredisk-495/pvc-kwh65 (uid: c23469cf-15b1-48ad-afa2-bc64548890fa)", boundByController: true
I0911 18:24:11.792788       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is bound to claim azuredisk-495/pvc-kwh65
I0911 18:24:11.792822       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: claim azuredisk-495/pvc-kwh65 not found
I0911 18:24:11.792926       1 pv_controller.go:1108] reclaimVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: policy is Delete
I0911 18:24:11.792952       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]]
I0911 18:24:11.793049       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa] started
I0911 18:24:11.800325       1 pv_controller.go:1340] isVolumeReleased[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is released
I0911 18:24:11.800345       1 pv_controller.go:1404] doDeleteVolume [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]
I0911 18:24:11.800384       1 pv_controller.go:1259] deletion of volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) since it's in attaching or detaching state
I0911 18:24:11.800398       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: set phase Failed
I0911 18:24:11.800409       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: phase Failed already set
E0911 18:24:11.800442       1 goroutinemap.go:150] Operation for "delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]" failed. No retries permitted until 2021-09-11 18:24:12.800418153 +0000 UTC m=+1335.537539914 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) since it's in attaching or detaching state
I0911 18:24:11.936544       1 gc_controller.go:161] GC'ing orphaned
I0911 18:24:11.936576       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:24:14.129095       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 1 items received
I0911 18:24:14.811595       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="86.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:49972" resp=200
I0911 18:24:15.554099       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) returned with <nil>
I0911 18:24:15.554235       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa) succeeded
... skipping 2 lines ...
I0911 18:24:17.874023       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 8 items received
I0911 18:24:21.448709       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:24:24.811967       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="90.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50070" resp=200
I0911 18:24:26.461486       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:24:26.793418       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:24:26.793532       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" with version 2803
I0911 18:24:26.793624       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: phase: Failed, bound to: "azuredisk-495/pvc-kwh65 (uid: c23469cf-15b1-48ad-afa2-bc64548890fa)", boundByController: true
I0911 18:24:26.793695       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is bound to claim azuredisk-495/pvc-kwh65
I0911 18:24:26.793718       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: claim azuredisk-495/pvc-kwh65 not found
I0911 18:24:26.793790       1 pv_controller.go:1108] reclaimVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: policy is Delete
I0911 18:24:26.793810       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]]
I0911 18:24:26.793878       1 pv_controller.go:1231] deleteVolumeOperation [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa] started
I0911 18:24:26.804495       1 pv_controller.go:1340] isVolumeReleased[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is released
... skipping 4 lines ...
I0911 18:24:31.955784       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa
I0911 18:24:31.955811       1 pv_controller.go:1435] volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" deleted
I0911 18:24:31.955825       1 pv_controller.go:1283] deleteVolumeOperation [pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: success
I0911 18:24:31.964170       1 pv_protection_controller.go:205] Got event on PV pvc-c23469cf-15b1-48ad-afa2-bc64548890fa
I0911 18:24:31.964207       1 pv_protection_controller.go:125] Processing PV pvc-c23469cf-15b1-48ad-afa2-bc64548890fa
I0911 18:24:31.964681       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" with version 2852
I0911 18:24:31.964732       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: phase: Failed, bound to: "azuredisk-495/pvc-kwh65 (uid: c23469cf-15b1-48ad-afa2-bc64548890fa)", boundByController: true
I0911 18:24:31.964774       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: volume is bound to claim azuredisk-495/pvc-kwh65
I0911 18:24:31.964822       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: claim azuredisk-495/pvc-kwh65 not found
I0911 18:24:31.964836       1 pv_controller.go:1108] reclaimVolume[pvc-c23469cf-15b1-48ad-afa2-bc64548890fa]: policy is Delete
I0911 18:24:31.964855       1 pv_controller.go:1752] scheduleOperation[delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]]
I0911 18:24:31.964870       1 pv_controller.go:1763] operation "delete-pvc-c23469cf-15b1-48ad-afa2-bc64548890fa[fa2092e1-289a-4a09-be03-8bbec106aecc]" is already running, skipping
I0911 18:24:31.970599       1 pv_controller_base.go:235] volume "pvc-c23469cf-15b1-48ad-afa2-bc64548890fa" deleted
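
Editor's note: the segment above records one complete delete cycle for pvc-c23469cf-15b1-48ad-afa2-bc64548890fa. doDeleteVolume fails while the disk is still "in attaching or detaching state", goroutinemap blocks further retries with an exponential backoff (the 500ms, 1s and 2s durationBeforeRetry values reported elsewhere in this log), the attach/detach controller finishes detaching the disk from capz-4tyuov-md-0-sgwmt, and the next PV-controller resync deletes the managed disk successfully. The following is only a minimal sketch of that retry-with-doubling-backoff pattern under those observations; retryWithBackoff, errDiskBusy and the attempt counts are illustrative stand-ins, not functions from kube-controller-manager or the Azure cloud provider.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errDiskBusy stands in for the "attaching or detaching state" /
// "already attached to node" failures seen in the log (illustrative only).
var errDiskBusy = errors.New("disk is attaching or detaching")

// retryWithBackoff retries op, doubling the wait after every failure,
// which mirrors the durationBeforeRetry progression (500ms, 1s, 2s, ...)
// that goroutinemap reports in the controller-manager log above.
func retryWithBackoff(op func() error, initial time.Duration, maxAttempts int) error {
	delay := initial
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		err := op()
		if err == nil {
			return nil
		}
		if attempt == maxAttempts {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2
	}
	return nil
}

func main() {
	remaining := 3 // pretend the disk finishes detaching after three failed deletes
	deleteDisk := func() error {
		if remaining > 0 {
			remaining--
			return errDiskBusy
		}
		return nil
	}
	if err := retryWithBackoff(deleteDisk, 500*time.Millisecond, 10); err != nil {
		fmt.Println("delete failed:", err)
	} else {
		fmt.Println("disk deleted")
	}
}
```

The per-operation backoff is what keeps the controller from hammering the Azure API while the detach is still in flight; once the detach completes, the very next delete attempt goes through, which is exactly the sequence the log shows.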
... skipping 113 lines ...
I0911 18:24:39.756034       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-495, name azuredisk-volume-tester-qz8ng.16a3d7af1ddb178b, uid 30860d7b-b9a0-4278-ab65-598e1c7fc0a0, event type delete
I0911 18:24:39.760918       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-495, name azuredisk-volume-tester-qz8ng.16a3d7af240838b0, uid e8fbe01e-8f13-4cb4-a286-2609c63d9230, event type delete
I0911 18:24:39.765423       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-495, name azuredisk-volume-tester-qz8ng.16a3d7af3c1f4410, uid 8c56e65c-f39f-4283-bd44-c58f74b72221, event type delete
I0911 18:24:39.769401       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-495, name pvc-kwh65.16a3d7a9d9f3ca13, uid 007356bd-2875-45c8-8584-7d77eea12f0d, event type delete
I0911 18:24:39.772780       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-495, name pvc-kwh65.16a3d7aa654ebcf1, uid fe72e1f7-1cd3-462a-b8c8-b020eae219f5, event type delete
I0911 18:24:39.827584       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-495, name default-token-7txcg, uid 27475171-e317-4c98-8838-5cb242c98ec9, event type delete
E0911 18:24:39.859172       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-495/default: secrets "default-token-ctgxf" is forbidden: unable to create new content in namespace azuredisk-495 because it is being terminated
I0911 18:24:39.869067       1 tokens_controller.go:252] syncServiceAccount(azuredisk-495/default), service account deleted, removing tokens
I0911 18:24:39.869135       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-495, name default, uid 9988767c-add1-4fb3-8a66-80d0fe4bd2fe, event type delete
I0911 18:24:39.869170       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-495" (2.7µs)
I0911 18:24:39.901154       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-495, name kube-root-ca.crt, uid 4e69b4c1-2d07-49d3-82a2-39abd117726f, event type delete
I0911 18:24:39.904556       1 publisher.go:186] Finished syncing namespace "azuredisk-495" (3.360921ms)
I0911 18:24:39.915661       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-495, estimate: 0, errors: <nil>
... skipping 155 lines ...
I0911 18:25:00.193372       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: claim azuredisk-9947/pvc-clnt4 not found
I0911 18:25:00.193451       1 pv_controller.go:1108] reclaimVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: policy is Delete
I0911 18:25:00.193540       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]]
I0911 18:25:00.193605       1 pv_controller.go:1763] operation "delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]" is already running, skipping
I0911 18:25:00.199533       1 pv_controller.go:1340] isVolumeReleased[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is released
I0911 18:25:00.199550       1 pv_controller.go:1404] doDeleteVolume [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]
I0911 18:25:00.229484       1 pv_controller.go:1259] deletion of volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:25:00.229506       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: set phase Failed
I0911 18:25:00.229515       1 pv_controller.go:858] updating PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: set phase Failed
I0911 18:25:00.232847       1 pv_protection_controller.go:205] Got event on PV pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd
I0911 18:25:00.232878       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" with version 2953
I0911 18:25:00.232906       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: phase: Failed, bound to: "azuredisk-9947/pvc-clnt4 (uid: f1cc0357-6254-4a7c-a69e-24101a0bd7dd)", boundByController: true
I0911 18:25:00.232931       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is bound to claim azuredisk-9947/pvc-clnt4
I0911 18:25:00.232949       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: claim azuredisk-9947/pvc-clnt4 not found
I0911 18:25:00.232958       1 pv_controller.go:1108] reclaimVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: policy is Delete
I0911 18:25:00.232969       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]]
I0911 18:25:00.232976       1 pv_controller.go:1763] operation "delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]" is already running, skipping
I0911 18:25:00.233973       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" with version 2953
I0911 18:25:00.234004       1 pv_controller.go:879] volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" entered phase "Failed"
I0911 18:25:00.234051       1 pv_controller.go:901] volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
E0911 18:25:00.234161       1 goroutinemap.go:150] Operation for "delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]" failed. No retries permitted until 2021-09-11 18:25:00.734139664 +0000 UTC m=+1383.471261325 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:25:00.234557       1 event.go:291] "Event occurred" object="pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted"
I0911 18:25:00.394139       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
I0911 18:25:00.394182       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd to the node "capz-4tyuov-md-0-sgwmt" mounted false
I0911 18:25:00.494712       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-4tyuov-md-0-sgwmt" succeeded. VolumesAttached: []
I0911 18:25:00.494783       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd") on node "capz-4tyuov-md-0-sgwmt" 
I0911 18:25:00.496538       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
... skipping 8 lines ...
I0911 18:25:08.616607       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.LimitRange total 2 items received
I0911 18:25:09.983532       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 3 items received
I0911 18:25:11.463836       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:25:11.534487       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:25:11.795176       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:25:11.795248       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" with version 2953
I0911 18:25:11.795298       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: phase: Failed, bound to: "azuredisk-9947/pvc-clnt4 (uid: f1cc0357-6254-4a7c-a69e-24101a0bd7dd)", boundByController: true
I0911 18:25:11.795334       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is bound to claim azuredisk-9947/pvc-clnt4
I0911 18:25:11.795352       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: claim azuredisk-9947/pvc-clnt4 not found
I0911 18:25:11.795360       1 pv_controller.go:1108] reclaimVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: policy is Delete
I0911 18:25:11.795375       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]]
I0911 18:25:11.795407       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd] started
I0911 18:25:11.807038       1 pv_controller.go:1340] isVolumeReleased[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is released
I0911 18:25:11.807060       1 pv_controller.go:1404] doDeleteVolume [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]
I0911 18:25:11.807099       1 pv_controller.go:1259] deletion of volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) since it's in attaching or detaching state
I0911 18:25:11.807118       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: set phase Failed
I0911 18:25:11.807128       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: phase Failed already set
E0911 18:25:11.807158       1 goroutinemap.go:150] Operation for "delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]" failed. No retries permitted until 2021-09-11 18:25:12.807136518 +0000 UTC m=+1395.544258179 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) since it's in attaching or detaching state
I0911 18:25:11.938114       1 gc_controller.go:161] GC'ing orphaned
I0911 18:25:11.938152       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:25:12.260629       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0911 18:25:14.811358       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="56.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50548" resp=200
I0911 18:25:16.080709       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) returned with <nil>
I0911 18:25:16.080746       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd) succeeded
... skipping 4 lines ...
I0911 18:25:22.408527       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PodTemplate total 9 items received
I0911 18:25:22.426904       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 8 items received
I0911 18:25:24.810830       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="95.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:50644" resp=200
I0911 18:25:26.464435       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:25:26.795333       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:25:26.795401       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" with version 2953
I0911 18:25:26.795440       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: phase: Failed, bound to: "azuredisk-9947/pvc-clnt4 (uid: f1cc0357-6254-4a7c-a69e-24101a0bd7dd)", boundByController: true
I0911 18:25:26.795495       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is bound to claim azuredisk-9947/pvc-clnt4
I0911 18:25:26.795511       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: claim azuredisk-9947/pvc-clnt4 not found
I0911 18:25:26.795519       1 pv_controller.go:1108] reclaimVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: policy is Delete
I0911 18:25:26.795533       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]]
I0911 18:25:26.795562       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd] started
I0911 18:25:26.804275       1 pv_controller.go:1340] isVolumeReleased[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is released
... skipping 8 lines ...
I0911 18:25:31.971600       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd
I0911 18:25:31.971628       1 pv_controller.go:1435] volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" deleted
I0911 18:25:31.971781       1 pv_controller.go:1283] deleteVolumeOperation [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: success
I0911 18:25:31.979253       1 pv_protection_controller.go:205] Got event on PV pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd
I0911 18:25:31.980260       1 pv_protection_controller.go:125] Processing PV pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd
I0911 18:25:31.979905       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" with version 3001
I0911 18:25:31.980566       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: phase: Failed, bound to: "azuredisk-9947/pvc-clnt4 (uid: f1cc0357-6254-4a7c-a69e-24101a0bd7dd)", boundByController: true
I0911 18:25:31.980632       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: volume is bound to claim azuredisk-9947/pvc-clnt4
I0911 18:25:31.980657       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: claim azuredisk-9947/pvc-clnt4 not found
I0911 18:25:31.980669       1 pv_controller.go:1108] reclaimVolume[pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd]: policy is Delete
I0911 18:25:31.980748       1 pv_controller.go:1752] scheduleOperation[delete-pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd[a856e2a7-8958-469f-a37c-02d99229482c]]
I0911 18:25:31.980870       1 pv_controller.go:1231] deleteVolumeOperation [pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd] started
I0911 18:25:31.985904       1 pv_controller.go:1243] Volume "pvc-f1cc0357-6254-4a7c-a69e-24101a0bd7dd" is already being deleted
... skipping 114 lines ...
I0911 18:25:40.700755       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9947, name azuredisk-volume-tester-gfnbv.16a3d7bd557dddc9, uid 321df568-1066-426a-828b-b087425c0057, event type delete
I0911 18:25:40.705524       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9947, name azuredisk-volume-tester-gfnbv.16a3d7bd5c519aba, uid 205b006d-1a33-4306-9a3b-6e042c2ae061, event type delete
I0911 18:25:40.709768       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9947, name azuredisk-volume-tester-gfnbv.16a3d7bd6fc205f0, uid bed41b37-9599-41e8-a297-333b091c8069, event type delete
I0911 18:25:40.713608       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9947, name pvc-clnt4.16a3d7b809528532, uid 1a5499ed-bbd2-43c3-bdf0-8e7a3d393b4e, event type delete
I0911 18:25:40.723852       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-9947, name pvc-clnt4.16a3d7b8975bbf42, uid 2e975473-7cd0-4c2a-a92e-01a93d574bcc, event type delete
I0911 18:25:40.795181       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-9947, name default-token-7gzhn, uid 0804ed4a-7f8a-44cd-bd99-825f752edb8d, event type delete
E0911 18:25:40.820481       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-9947/default: secrets "default-token-6d4w4" is forbidden: unable to create new content in namespace azuredisk-9947 because it is being terminated
I0911 18:25:40.845717       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-9947, name default, uid 6c7be5d9-be51-4e06-9563-c2c498da8416, event type delete
I0911 18:25:40.847266       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-9947" (3.1µs)
I0911 18:25:40.847485       1 tokens_controller.go:252] syncServiceAccount(azuredisk-9947/default), service account deleted, removing tokens
I0911 18:25:40.878927       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-9947, name kube-root-ca.crt, uid 0c6d7513-736a-4ba2-af4b-91e31643d350, event type delete
I0911 18:25:40.881399       1 publisher.go:186] Finished syncing namespace "azuredisk-9947" (2.425515ms)
I0911 18:25:40.902518       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-9947, estimate: 0, errors: <nil>
... skipping 716 lines ...
I0911 18:27:00.892817       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: claim azuredisk-5541/pvc-xrh5p not found
I0911 18:27:00.892827       1 pv_controller.go:1108] reclaimVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: policy is Delete
I0911 18:27:00.892844       1 pv_controller.go:1752] scheduleOperation[delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]]
I0911 18:27:00.892850       1 pv_controller.go:1763] operation "delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]" is already running, skipping
I0911 18:27:00.896079       1 pv_controller.go:1340] isVolumeReleased[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is released
I0911 18:27:00.896190       1 pv_controller.go:1404] doDeleteVolume [pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]
I0911 18:27:00.916977       1 pv_controller.go:1259] deletion of volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:27:00.916995       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: set phase Failed
I0911 18:27:00.917057       1 pv_controller.go:858] updating PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: set phase Failed
I0911 18:27:00.920109       1 pv_protection_controller.go:205] Got event on PV pvc-03d622fc-f747-4a7b-9193-5d2251c785bc
I0911 18:27:00.920145       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" with version 3231
I0911 18:27:00.920205       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: phase: Failed, bound to: "azuredisk-5541/pvc-xrh5p (uid: 03d622fc-f747-4a7b-9193-5d2251c785bc)", boundByController: true
I0911 18:27:00.920254       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is bound to claim azuredisk-5541/pvc-xrh5p
I0911 18:27:00.920294       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: claim azuredisk-5541/pvc-xrh5p not found
I0911 18:27:00.920302       1 pv_controller.go:1108] reclaimVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: policy is Delete
I0911 18:27:00.920312       1 pv_controller.go:1752] scheduleOperation[delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]]
I0911 18:27:00.920320       1 pv_controller.go:1763] operation "delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]" is already running, skipping
I0911 18:27:00.921598       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" with version 3231
I0911 18:27:00.921643       1 pv_controller.go:879] volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" entered phase "Failed"
I0911 18:27:00.921678       1 pv_controller.go:901] volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:27:00.921815       1 goroutinemap.go:150] Operation for "delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]" failed. No retries permitted until 2021-09-11 18:27:01.42170933 +0000 UTC m=+1504.158830991 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:27:00.922041       1 event.go:291] "Event occurred" object="pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:27:01.231733       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:27:01.234196       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac to the node "capz-4tyuov-md-0-pxbpw" mounted true
I0911 18:27:01.234217       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc to the node "capz-4tyuov-md-0-pxbpw" mounted false
I0911 18:27:01.315224       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":[{\"devicePath\":\"0\",\"name\":\"kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac\"}]}}" for node "capz-4tyuov-md-0-pxbpw" succeeded. VolumesAttached: [{kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac 0}]
I0911 18:27:01.315396       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc") on node "capz-4tyuov-md-0-pxbpw" 
... skipping 30 lines ...
I0911 18:27:11.802729       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:27:11.802766       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m found: phase: Bound, bound to: "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d", bindCompleted: true, boundByController: true
I0911 18:27:11.802795       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: all is bound
I0911 18:27:11.802820       1 pv_controller.go:858] updating PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: set phase Bound
I0911 18:27:11.802856       1 pv_controller.go:861] updating PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase Bound already set
I0911 18:27:11.802902       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" with version 3231
I0911 18:27:11.802984       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: phase: Failed, bound to: "azuredisk-5541/pvc-xrh5p (uid: 03d622fc-f747-4a7b-9193-5d2251c785bc)", boundByController: true
I0911 18:27:11.803018       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is bound to claim azuredisk-5541/pvc-xrh5p
I0911 18:27:11.803048       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: claim azuredisk-5541/pvc-xrh5p not found
I0911 18:27:11.803059       1 pv_controller.go:1108] reclaimVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: policy is Delete
I0911 18:27:11.803075       1 pv_controller.go:1752] scheduleOperation[delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]]
I0911 18:27:11.803107       1 pv_controller.go:1231] deleteVolumeOperation [pvc-03d622fc-f747-4a7b-9193-5d2251c785bc] started
I0911 18:27:11.803334       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5541/pvc-7wq65" with version 3029
... skipping 27 lines ...
I0911 18:27:11.803690       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5541/pvc-x6p5m] status: phase Bound already set
I0911 18:27:11.803717       1 pv_controller.go:1038] volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" bound to claim "azuredisk-5541/pvc-x6p5m"
I0911 18:27:11.803731       1 pv_controller.go:1039] volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" status after binding: phase: Bound, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:27:11.803744       1 pv_controller.go:1040] claim "azuredisk-5541/pvc-x6p5m" status after binding: phase: Bound, bound to: "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d", bindCompleted: true, boundByController: true
I0911 18:27:11.807470       1 pv_controller.go:1340] isVolumeReleased[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is released
I0911 18:27:11.807492       1 pv_controller.go:1404] doDeleteVolume [pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]
I0911 18:27:11.807544       1 pv_controller.go:1259] deletion of volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) since it's in attaching or detaching state
I0911 18:27:11.807560       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: set phase Failed
I0911 18:27:11.807569       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: phase Failed already set
E0911 18:27:11.807618       1 goroutinemap.go:150] Operation for "delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]" failed. No retries permitted until 2021-09-11 18:27:12.807577095 +0000 UTC m=+1515.544698856 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) since it's in attaching or detaching state
I0911 18:27:11.944999       1 gc_controller.go:161] GC'ing orphaned
I0911 18:27:11.945033       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:27:14.810450       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="90.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51718" resp=200
I0911 18:27:16.805652       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) returned with <nil>
I0911 18:27:16.805701       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc) succeeded
I0911 18:27:16.805983       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc was detached from node:capz-4tyuov-md-0-pxbpw
... skipping 2 lines ...
I0911 18:27:21.561371       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:27:23.792806       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 9 items received
I0911 18:27:24.816167       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="211.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:51816" resp=200
I0911 18:27:26.473616       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:27:26.802921       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:27:26.802993       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" with version 3231
I0911 18:27:26.803059       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: phase: Failed, bound to: "azuredisk-5541/pvc-xrh5p (uid: 03d622fc-f747-4a7b-9193-5d2251c785bc)", boundByController: true
I0911 18:27:26.803186       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5541/pvc-7wq65" with version 3029
I0911 18:27:26.803262       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:27:26.803277       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is bound to claim azuredisk-5541/pvc-xrh5p
I0911 18:27:26.803393       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: claim azuredisk-5541/pvc-xrh5p not found
I0911 18:27:26.803625       1 pv_controller.go:1108] reclaimVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: policy is Delete
I0911 18:27:26.803655       1 pv_controller.go:1752] scheduleOperation[delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]]
... skipping 49 lines ...
I0911 18:27:31.961524       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc
I0911 18:27:31.961553       1 pv_controller.go:1435] volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" deleted
I0911 18:27:31.961569       1 pv_controller.go:1283] deleteVolumeOperation [pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: success
I0911 18:27:31.966910       1 pv_protection_controller.go:205] Got event on PV pvc-03d622fc-f747-4a7b-9193-5d2251c785bc
I0911 18:27:31.966936       1 pv_protection_controller.go:125] Processing PV pvc-03d622fc-f747-4a7b-9193-5d2251c785bc
I0911 18:27:31.967250       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" with version 3278
I0911 18:27:31.967283       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: phase: Failed, bound to: "azuredisk-5541/pvc-xrh5p (uid: 03d622fc-f747-4a7b-9193-5d2251c785bc)", boundByController: true
I0911 18:27:31.967309       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: volume is bound to claim azuredisk-5541/pvc-xrh5p
I0911 18:27:31.967330       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: claim azuredisk-5541/pvc-xrh5p not found
I0911 18:27:31.967339       1 pv_controller.go:1108] reclaimVolume[pvc-03d622fc-f747-4a7b-9193-5d2251c785bc]: policy is Delete
I0911 18:27:31.967354       1 pv_controller.go:1752] scheduleOperation[delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]]
I0911 18:27:31.967363       1 pv_controller.go:1763] operation "delete-pvc-03d622fc-f747-4a7b-9193-5d2251c785bc[3bfac271-1b72-45f6-8b15-23b565fa95d8]" is already running, skipping
I0911 18:27:31.973487       1 pv_controller_base.go:235] volume "pvc-03d622fc-f747-4a7b-9193-5d2251c785bc" deleted
... skipping 189 lines ...
I0911 18:28:06.010068       1 pv_controller.go:1108] reclaimVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: policy is Delete
I0911 18:28:06.010080       1 pv_controller.go:1752] scheduleOperation[delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]]
I0911 18:28:06.010087       1 pv_controller.go:1763] operation "delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]" is already running, skipping
I0911 18:28:06.010112       1 pv_controller.go:1231] deleteVolumeOperation [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d] started
I0911 18:28:06.013579       1 pv_controller.go:1340] isVolumeReleased[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is released
I0911 18:28:06.013595       1 pv_controller.go:1404] doDeleteVolume [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]
I0911 18:28:06.036188       1 pv_controller.go:1259] deletion of volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:28:06.036211       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: set phase Failed
I0911 18:28:06.036220       1 pv_controller.go:858] updating PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: set phase Failed
I0911 18:28:06.039486       1 pv_protection_controller.go:205] Got event on PV pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d
I0911 18:28:06.039538       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3341
I0911 18:28:06.039583       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase: Failed, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:28:06.039611       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:28:06.039630       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m not found
I0911 18:28:06.039638       1 pv_controller.go:1108] reclaimVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: policy is Delete
I0911 18:28:06.039649       1 pv_controller.go:1752] scheduleOperation[delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]]
I0911 18:28:06.039656       1 pv_controller.go:1763] operation "delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]" is already running, skipping
I0911 18:28:06.041131       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3341
I0911 18:28:06.041155       1 pv_controller.go:879] volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" entered phase "Failed"
I0911 18:28:06.041203       1 pv_controller.go:901] volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
E0911 18:28:06.041346       1 goroutinemap.go:150] Operation for "delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]" failed. No retries permitted until 2021-09-11 18:28:06.541288569 +0000 UTC m=+1569.278410330 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:28:06.041430       1 event.go:291] "Event occurred" object="pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted"
I0911 18:28:11.475975       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:28:11.542959       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:28:11.804952       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:28:11.805193       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3027
I0911 18:28:11.805251       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase: Bound, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:28:11.805286       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is bound to claim azuredisk-5541/pvc-7wq65
I0911 18:28:11.805322       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 found: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:28:11.805339       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: all is bound
I0911 18:28:11.805347       1 pv_controller.go:858] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Bound
I0911 18:28:11.805356       1 pv_controller.go:861] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase Bound already set
I0911 18:28:11.805399       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3341
I0911 18:28:11.805421       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase: Failed, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:28:11.805443       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:28:11.805481       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m not found
I0911 18:28:11.805508       1 pv_controller.go:1108] reclaimVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: policy is Delete
I0911 18:28:11.805524       1 pv_controller.go:1752] scheduleOperation[delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]]
I0911 18:28:11.805554       1 pv_controller.go:1231] deleteVolumeOperation [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d] started
I0911 18:28:11.805754       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5541/pvc-7wq65" with version 3029
... skipping 11 lines ...
I0911 18:28:11.807503       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5541/pvc-7wq65] status: phase Bound already set
I0911 18:28:11.807710       1 pv_controller.go:1038] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" bound to claim "azuredisk-5541/pvc-7wq65"
I0911 18:28:11.807805       1 pv_controller.go:1039] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" status after binding: phase: Bound, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:28:11.807888       1 pv_controller.go:1040] claim "azuredisk-5541/pvc-7wq65" status after binding: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:28:11.818667       1 pv_controller.go:1340] isVolumeReleased[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is released
I0911 18:28:11.818687       1 pv_controller.go:1404] doDeleteVolume [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]
I0911 18:28:11.839273       1 pv_controller.go:1259] deletion of volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:28:11.839353       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: set phase Failed
I0911 18:28:11.839366       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase Failed already set
E0911 18:28:11.839396       1 goroutinemap.go:150] Operation for "delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]" failed. No retries permitted until 2021-09-11 18:28:12.839375036 +0000 UTC m=+1575.576496697 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), could not be deleted
I0911 18:28:11.946809       1 gc_controller.go:161] GC'ing orphaned
I0911 18:28:11.947125       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:28:11.973325       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-sgwmt"
I0911 18:28:11.973354       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d to the node "capz-4tyuov-md-0-sgwmt" mounted false
I0911 18:28:12.058873       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-4tyuov-md-0-sgwmt" succeeded. VolumesAttached: []
I0911 18:28:12.058970       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d") on node "capz-4tyuov-md-0-sgwmt" 
... skipping 25 lines ...
I0911 18:28:26.807069       1 pv_controller.go:861] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase Bound already set
I0911 18:28:26.807093       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 found: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:28:26.807172       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: all is bound
I0911 18:28:26.807267       1 pv_controller.go:858] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Bound
I0911 18:28:26.807370       1 pv_controller.go:861] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase Bound already set
I0911 18:28:26.807392       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3341
I0911 18:28:26.807501       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase: Failed, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:28:26.807533       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:28:26.807153       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: binding to "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac"
I0911 18:28:26.807645       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m not found
I0911 18:28:26.807663       1 pv_controller.go:1108] reclaimVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: policy is Delete
I0911 18:28:26.807710       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: already bound to "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac"
I0911 18:28:26.807750       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5541/pvc-7wq65] status: set phase Bound
... skipping 2 lines ...
I0911 18:28:26.808023       1 pv_controller.go:1039] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" status after binding: phase: Bound, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:28:26.807768       1 pv_controller.go:1752] scheduleOperation[delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]]
I0911 18:28:26.808210       1 pv_controller.go:1231] deleteVolumeOperation [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d] started
I0911 18:28:26.808301       1 pv_controller.go:1040] claim "azuredisk-5541/pvc-7wq65" status after binding: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:28:26.813652       1 pv_controller.go:1340] isVolumeReleased[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is released
I0911 18:28:26.813690       1 pv_controller.go:1404] doDeleteVolume [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]
I0911 18:28:26.813938       1 pv_controller.go:1259] deletion of volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) since it's in attaching or detaching state
I0911 18:28:26.814004       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: set phase Failed
I0911 18:28:26.814134       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase Failed already set
E0911 18:28:26.814258       1 goroutinemap.go:150] Operation for "delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]" failed. No retries permitted until 2021-09-11 18:28:28.814161778 +0000 UTC m=+1591.551283539 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) since it's in attaching or detaching state
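
The "No retries permitted until ... (durationBeforeRetry 2s)" error above, and the later "operation ... is already running, skipping" / "postponed due to exponential backoff" lines, come from the controller's per-operation bookkeeping: one goroutine per operation name, with a growing delay between failed attempts. Below is a simplified, standard-library-only sketch of that idea, assuming made-up names (opState, goroutineMap.Run); the real implementation lives in k8s.io/kubernetes/pkg/util/goroutinemap and differs in detail.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// opState tracks one named operation: whether it is running and when it may
// next be retried after a failure.
type opState struct {
	running   bool
	backoff   time.Duration
	notBefore time.Time
}

// goroutineMap deduplicates operations by name and enforces an exponential
// backoff between failed attempts, similar to the controller's goroutinemap.
type goroutineMap struct {
	mu  sync.Mutex
	ops map[string]*opState
}

func newGoroutineMap() *goroutineMap {
	return &goroutineMap{ops: map[string]*opState{}}
}

// Run starts fn for name unless it is already running or still in backoff.
func (g *goroutineMap) Run(name string, fn func() error) error {
	g.mu.Lock()
	st, ok := g.ops[name]
	if !ok {
		st = &opState{backoff: time.Second}
		g.ops[name] = st
	}
	if st.running {
		g.mu.Unlock()
		return fmt.Errorf("operation %q is already running, skipping", name)
	}
	if time.Now().Before(st.notBefore) {
		g.mu.Unlock()
		return fmt.Errorf("operation %q postponed due to exponential backoff", name)
	}
	st.running = true
	g.mu.Unlock()

	go func() {
		err := fn()
		g.mu.Lock()
		defer g.mu.Unlock()
		st.running = false
		if err != nil {
			// Double the delay on every failure; reset it on success.
			st.notBefore = time.Now().Add(st.backoff)
			st.backoff *= 2
		} else {
			st.backoff = time.Second
			st.notBefore = time.Time{}
		}
	}()
	return nil
}

func main() {
	gm := newGoroutineMap()
	deleteDisk := func() error { return errors.New("disk is in attaching or detaching state") }
	_ = gm.Run("delete-pvc-354f5501", deleteDisk)
	time.Sleep(100 * time.Millisecond)
	// An immediate retry is rejected until the backoff window has passed.
	fmt.Println(gm.Run("delete-pvc-354f5501", deleteDisk))
}
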
I0911 18:28:27.693517       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) returned with <nil>
I0911 18:28:27.693562       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d) succeeded
I0911 18:28:27.693594       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d was detached from node:capz-4tyuov-md-0-sgwmt
I0911 18:28:27.693626       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d") on node "capz-4tyuov-md-0-sgwmt" 
I0911 18:28:30.430525       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 10 items received
I0911 18:28:31.948443       1 gc_controller.go:161] GC'ing orphaned
... skipping 9 lines ...
I0911 18:28:41.807269       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: phase: Bound, bound to: "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac", bindCompleted: true, boundByController: true
I0911 18:28:41.807361       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3341
I0911 18:28:41.807362       1 pv_controller.go:503] synchronizing bound PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" found: phase: Bound, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:28:41.807376       1 pv_controller.go:520] synchronizing bound PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: claim is already correctly bound
I0911 18:28:41.807385       1 pv_controller.go:1012] binding volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" to claim "azuredisk-5541/pvc-7wq65"
I0911 18:28:41.807395       1 pv_controller.go:910] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: binding to "azuredisk-5541/pvc-7wq65"
I0911 18:28:41.807415       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase: Failed, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:28:41.807417       1 pv_controller.go:922] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: already bound to "azuredisk-5541/pvc-7wq65"
I0911 18:28:41.807445       1 pv_controller.go:858] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Bound
I0911 18:28:41.807455       1 pv_controller.go:861] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase Bound already set
I0911 18:28:41.807459       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:28:41.807463       1 pv_controller.go:950] updating PersistentVolumeClaim[azuredisk-5541/pvc-7wq65]: binding to "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac"
I0911 18:28:41.807478       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m not found
... skipping 19 lines ...
I0911 18:28:46.981027       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d
I0911 18:28:46.981067       1 pv_controller.go:1435] volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" deleted
I0911 18:28:46.981079       1 pv_controller.go:1283] deleteVolumeOperation [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: success
I0911 18:28:46.995228       1 pv_protection_controller.go:205] Got event on PV pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d
I0911 18:28:46.995267       1 pv_protection_controller.go:125] Processing PV pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d
I0911 18:28:46.996163       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" with version 3404
I0911 18:28:46.996379       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: phase: Failed, bound to: "azuredisk-5541/pvc-x6p5m (uid: 354f5501-6fb6-4e2b-93e2-f716ca817c4d)", boundByController: true
I0911 18:28:46.996420       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: volume is bound to claim azuredisk-5541/pvc-x6p5m
I0911 18:28:46.996517       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: claim azuredisk-5541/pvc-x6p5m not found
I0911 18:28:46.996533       1 pv_controller.go:1108] reclaimVolume[pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d]: policy is Delete
I0911 18:28:46.996550       1 pv_controller.go:1752] scheduleOperation[delete-pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d[cabb47d0-5387-4b06-bc19-f7677d96ad63]]
I0911 18:28:46.996578       1 pv_controller.go:1231] deleteVolumeOperation [pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d] started
I0911 18:28:47.001399       1 pv_controller.go:1243] Volume "pvc-354f5501-6fb6-4e2b-93e2-f716ca817c4d" is already being deleted
... skipping 153 lines ...
I0911 18:29:22.009152       1 pv_controller.go:1108] reclaimVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: policy is Delete
I0911 18:29:22.009164       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]]
I0911 18:29:22.009176       1 pv_controller.go:1763] operation "delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]" is already running, skipping
I0911 18:29:22.009211       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac] started
I0911 18:29:22.012465       1 pv_controller.go:1340] isVolumeReleased[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is released
I0911 18:29:22.012487       1 pv_controller.go:1404] doDeleteVolume [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]
I0911 18:29:22.045234       1 pv_controller.go:1259] deletion of volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:29:22.045260       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Failed
I0911 18:29:22.045293       1 pv_controller.go:858] updating PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Failed
I0911 18:29:22.050155       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3468
I0911 18:29:22.050191       1 pv_controller.go:879] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" entered phase "Failed"
I0911 18:29:22.050204       1 pv_controller.go:901] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:29:22.050252       1 goroutinemap.go:150] Operation for "delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]" failed. No retries permitted until 2021-09-11 18:29:22.550230966 +0000 UTC m=+1645.287352627 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:29:22.051337       1 event.go:291] "Event occurred" object="pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:29:22.051751       1 pv_protection_controller.go:205] Got event on PV pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac
I0911 18:29:22.051869       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3468
I0911 18:29:22.051991       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase: Failed, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:29:22.052208       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is bound to claim azuredisk-5541/pvc-7wq65
I0911 18:29:22.052402       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 not found
I0911 18:29:22.052496       1 pv_controller.go:1108] reclaimVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: policy is Delete
I0911 18:29:22.052679       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]]
I0911 18:29:22.052847       1 pv_controller.go:1765] operation "delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]" postponed due to exponential backoff
I0911 18:29:22.505174       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 34 items received
... skipping 10 lines ...
I0911 18:29:23.883851       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RuntimeClass total 0 items received
I0911 18:29:23.893525       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-pxbpw ReadyCondition updated. Updating timestamp.
I0911 18:29:24.811091       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="83.5µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:52966" resp=200
I0911 18:29:26.477702       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:29:26.808935       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:29:26.809103       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3468
I0911 18:29:26.809203       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase: Failed, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:29:26.809280       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is bound to claim azuredisk-5541/pvc-7wq65
I0911 18:29:26.809319       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 not found
I0911 18:29:26.809330       1 pv_controller.go:1108] reclaimVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: policy is Delete
I0911 18:29:26.809362       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]]
I0911 18:29:26.809425       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac] started
I0911 18:29:26.816502       1 pv_controller.go:1340] isVolumeReleased[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is released
I0911 18:29:26.816519       1 pv_controller.go:1404] doDeleteVolume [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]
I0911 18:29:26.816552       1 pv_controller.go:1259] deletion of volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) since it's in attaching or detaching state
I0911 18:29:26.816565       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: set phase Failed
I0911 18:29:26.816573       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase Failed already set
E0911 18:29:26.816616       1 goroutinemap.go:150] Operation for "delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]" failed. No retries permitted until 2021-09-11 18:29:27.816580784 +0000 UTC m=+1650.553702445 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) since it's in attaching or detaching state
I0911 18:29:31.951242       1 gc_controller.go:161] GC'ing orphaned
I0911 18:29:31.951281       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:29:34.811931       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="95.601µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:53064" resp=200
I0911 18:29:35.520655       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ClusterRole total 0 items received
I0911 18:29:38.098846       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) returned with <nil>
I0911 18:29:38.098896       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac) succeeded
I0911 18:29:38.099122       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac was detached from node:capz-4tyuov-md-0-pxbpw
I0911 18:29:38.099180       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac") on node "capz-4tyuov-md-0-pxbpw" 
I0911 18:29:41.478005       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:29:41.546211       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:29:41.809592       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:29:41.809668       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3468
I0911 18:29:41.809729       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase: Failed, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:29:41.809767       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is bound to claim azuredisk-5541/pvc-7wq65
I0911 18:29:41.809788       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 not found
I0911 18:29:41.809796       1 pv_controller.go:1108] reclaimVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: policy is Delete
I0911 18:29:41.809813       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]]
I0911 18:29:41.809869       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac] started
I0911 18:29:41.815001       1 pv_controller.go:1340] isVolumeReleased[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is released
... skipping 2 lines ...
I0911 18:29:46.991522       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac
I0911 18:29:46.991554       1 pv_controller.go:1435] volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" deleted
I0911 18:29:46.991567       1 pv_controller.go:1283] deleteVolumeOperation [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: success
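
Both disks in this namespace follow the same pattern: deletion is refused while the disk is attached ("already attached to node ..., could not be deleted") or mid-detach ("in attaching or detaching state"), and only succeeds once DetachVolume.Detach has completed. The sketch below illustrates that ordering by polling a hypothetical isAttached check before issuing the delete; isAttached and deleteDisk are stand-ins for illustration, not the cloud-provider API.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// deleteWhenDetached refuses to delete a disk while it is still attached to a
// node, retrying until the detach has finished or the context expires.
func deleteWhenDetached(ctx context.Context, diskURI string,
	isAttached func(string) (bool, error),
	deleteDisk func(string) error) error {

	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		attached, err := isAttached(diskURI)
		if err != nil {
			return err
		}
		if !attached {
			return deleteDisk(diskURI)
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for disk to detach: " + diskURI)
		case <-ticker.C:
			// Disk is still attached (or detaching); poll again on the next tick.
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Fake cloud: the disk reports detached on the second poll.
	polls := 0
	isAttached := func(string) (bool, error) { polls++; return polls < 2, nil }
	deleteDisk := func(uri string) error { fmt.Println("deleted", uri); return nil }

	if err := deleteWhenDetached(ctx, "/subscriptions/.../disks/pvc-example", isAttached, deleteDisk); err != nil {
		fmt.Println(err)
	}
}
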
I0911 18:29:47.004016       1 pv_protection_controller.go:205] Got event on PV pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac
I0911 18:29:47.004062       1 pv_protection_controller.go:125] Processing PV pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac
I0911 18:29:47.004316       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" with version 3508
I0911 18:29:47.004553       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: phase: Failed, bound to: "azuredisk-5541/pvc-7wq65 (uid: a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac)", boundByController: true
I0911 18:29:47.004716       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: volume is bound to claim azuredisk-5541/pvc-7wq65
I0911 18:29:47.005210       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: claim azuredisk-5541/pvc-7wq65 not found
I0911 18:29:47.005493       1 pv_controller.go:1108] reclaimVolume[pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac]: policy is Delete
I0911 18:29:47.006440       1 pv_controller.go:1752] scheduleOperation[delete-pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac[309ab063-4ff6-47eb-b9d8-be95bde19430]]
I0911 18:29:47.006543       1 pv_controller.go:1231] deleteVolumeOperation [pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac] started
I0911 18:29:47.009871       1 pv_controller.go:1243] Volume "pvc-a01ccd28-f977-43f5-bb6c-4fc0a6fe36ac" is already being deleted
... skipping 36 lines ...
I0911 18:29:52.329335       1 disruption.go:418] No matching pdb for pod "azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f"
I0911 18:29:52.329372       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f"
I0911 18:29:52.341054       1 controller_utils.go:581] Controller azuredisk-volume-tester-k2dhq-d5d45df45 created pod azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f
I0911 18:29:52.341123       1 replica_set_utils.go:59] Updating status for : azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0911 18:29:52.341717       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f"
I0911 18:29:52.342537       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="41.641264ms"
I0911 18:29:52.342599       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-k2dhq\": the object has been modified; please apply your changes to the latest version and try again"
I0911 18:29:52.342671       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:29:52.342648879 +0000 UTC m=+1675.079770640"
I0911 18:29:52.343265       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-k2dhq" timed out (false) [last progress check: 2021-09-11 18:29:52 +0000 UTC - now: 2021-09-11 18:29:52.343259183 +0000 UTC m=+1675.080380944]
I0911 18:29:52.343862       1 pvc_protection_controller.go:353] "Got event on PVC" azuredisk-5356/pvc-mhpv8="(MISSING)"
I0911 18:29:52.343918       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5356/pvc-mhpv8" with version 3531
I0911 18:29:52.344107       1 pv_controller.go:253] synchronizing PersistentVolumeClaim[azuredisk-5356/pvc-mhpv8]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0911 18:29:52.344285       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-5356/pvc-mhpv8]: no volume found
... skipping 20 lines ...
I0911 18:29:52.365241       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d StorageAccountType:StandardSSD_LRS Size:10
I0911 18:29:52.365831       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45"
I0911 18:29:52.366025       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45" (14.361991ms)
I0911 18:29:52.366196       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45", timestamp:time.Time{wall:0xc0475b4812799b95, ext:1675047081234, loc:(*time.Location)(0x7504dc0)}}
I0911 18:29:52.366335       1 replica_set.go:653] Finished syncing ReplicaSet "azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45" (206.801µs)
I0911 18:29:52.368246       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="9.46206ms"
I0911 18:29:52.368291       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-k2dhq\": the object has been modified; please apply your changes to the latest version and try again"
I0911 18:29:52.368459       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:29:52.368426043 +0000 UTC m=+1675.105547704"
I0911 18:29:52.375825       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="7.362246ms"
I0911 18:29:52.375889       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:29:52.37586689 +0000 UTC m=+1675.112988551"
I0911 18:29:52.376823       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq"
I0911 18:29:52.382746       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="6.848143ms"
I0911 18:29:52.382796       1 deployment_controller.go:490] "Error syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-k2dhq\": the object has been modified; please apply your changes to the latest version and try again"
I0911 18:29:52.382831       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:29:52.382812434 +0000 UTC m=+1675.119934095"
I0911 18:29:52.383181       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-k2dhq" timed out (false) [last progress check: 2021-09-11 18:29:52 +0000 UTC - now: 2021-09-11 18:29:52.383176836 +0000 UTC m=+1675.120298497]
I0911 18:29:52.383220       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-k2dhq" for a progress check after 599s
I0911 18:29:52.383243       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="419.002µs"
I0911 18:29:52.388714       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:29:52.388680071 +0000 UTC m=+1675.125801732"
I0911 18:29:52.389063       1 deployment_util.go:808] Deployment "azuredisk-volume-tester-k2dhq" timed out (false) [last progress check: 2021-09-11 18:29:52 +0000 UTC - now: 2021-09-11 18:29:52.389059573 +0000 UTC m=+1675.126181234]
... skipping 241 lines ...
I0911 18:30:20.290119       1 replica_set_utils.go:59] Updating status for : azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0911 18:30:20.290676       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq"
I0911 18:30:20.290720       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45"
I0911 18:30:20.290845       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="17.540791ms"
I0911 18:30:20.290895       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:30:20.290874288 +0000 UTC m=+1703.027996049"
I0911 18:30:20.297937       1 deployment_controller.go:176] "Updating deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq"
W0911 18:30:20.300038       1 reconciler.go:376] Multi-Attach error for volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d") from node "capz-4tyuov-md-0-pxbpw" Volume is already used by pods azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f on node capz-4tyuov-md-0-sgwmt
I0911 18:30:20.300229       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="9.337048ms"
I0911 18:30:20.300271       1 deployment_controller.go:576] "Started syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" startTime="2021-09-11 18:30:20.300248736 +0000 UTC m=+1703.037370397"
I0911 18:30:20.301662       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-k2dhq" for a progress check after 594s
I0911 18:30:20.300439       1 event.go:291] "Event occurred" object="azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45-rkb75" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d\" Volume is already used by pod(s) azuredisk-volume-tester-k2dhq-d5d45df45-zfx6f"
I0911 18:30:20.302036       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azuredisk-5356/azuredisk-volume-tester-k2dhq" duration="1.768309ms"
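
The Multi-Attach warning above is the attach/detach controller refusing to attach a ReadWriteOnce disk to capz-4tyuov-md-0-pxbpw while it is still attached to capz-4tyuov-md-0-sgwmt; the replacement pod stays pending until the old attachment is torn down. A minimal sketch of that guard over an in-memory attachment map follows; the attachments type and canAttach method are assumptions for illustration, not the controller's real actual-state-of-world structures.

package main

import "fmt"

// attachments records which node each volume is currently attached to.
// ReadWriteOnce block volumes may be attached to at most one node at a time.
type attachments map[string]string // volume name -> node name

// canAttach reports whether volume may be attached to node, returning a
// multi-attach error when the volume is already attached elsewhere.
func (a attachments) canAttach(volume, node string) error {
	if current, ok := a[volume]; ok && current != node {
		return fmt.Errorf("Multi-Attach error for volume %q: already attached to node %q", volume, current)
	}
	return nil
}

func main() {
	state := attachments{"pvc-62b63e5d": "capz-4tyuov-md-0-sgwmt"}

	// Scheduling the replacement pod on a different node must wait for detach.
	if err := state.canAttach("pvc-62b63e5d", "capz-4tyuov-md-0-pxbpw"); err != nil {
		fmt.Println(err)
	}

	// Once the detach completes the attachment record is cleared and the
	// attach can proceed.
	delete(state, "pvc-62b63e5d")
	fmt.Println(state.canAttach("pvc-62b63e5d", "capz-4tyuov-md-0-pxbpw")) // <nil>
}
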
I0911 18:30:20.306983       1 replica_set.go:443] Pod azuredisk-volume-tester-k2dhq-d5d45df45-rkb75 updated, objectMeta {Name:azuredisk-volume-tester-k2dhq-d5d45df45-rkb75 GenerateName:azuredisk-volume-tester-k2dhq-d5d45df45- Namespace:azuredisk-5356 SelfLink: UID:a5b46a3f-b1c2-40a2-8678-261ee1928f59 ResourceVersion:3632 Generation:0 CreationTimestamp:2021-09-11 18:30:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-1598098976185383115 pod-template-hash:d5d45df45] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-k2dhq-d5d45df45 UID:893f995f-ca7d-4d52-8a51-df753e46d74e Controller:0xc002c05557 BlockOwnerDeletion:0xc002c05558}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:30:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893f995f-ca7d-4d52-8a51-df753e46d74e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azuredisk-volume-tester-k2dhq-d5d45df45-rkb75 GenerateName:azuredisk-volume-tester-k2dhq-d5d45df45- Namespace:azuredisk-5356 SelfLink: UID:a5b46a3f-b1c2-40a2-8678-261ee1928f59 ResourceVersion:3638 Generation:0 CreationTimestamp:2021-09-11 18:30:20 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azuredisk-volume-tester-1598098976185383115 pod-template-hash:d5d45df45] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azuredisk-volume-tester-k2dhq-d5d45df45 UID:893f995f-ca7d-4d52-8a51-df753e46d74e Controller:0xc002384da0 BlockOwnerDeletion:0xc002384da1}] Finalizers:[] ClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2021-09-11 18:30:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"893f995f-ca7d-4d52-8a51-df753e46d74e\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2021-09-11 18:30:20 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0911 18:30:20.307169       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-k2dhq-d5d45df45-rkb75"
I0911 18:30:20.307198       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-k2dhq-d5d45df45-rkb75, PodDisruptionBudget controller will avoid syncing.
I0911 18:30:20.307207       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-k2dhq-d5d45df45-rkb75"
I0911 18:30:20.311490       1 replica_set_utils.go:59] Updating status for : azuredisk-5356/azuredisk-volume-tester-k2dhq-d5d45df45, replicas 1->1 (need 1), fullyLabeledReplicas 1->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 401 lines ...
I0911 18:32:03.313457       1 pv_controller.go:1231] deleteVolumeOperation [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d] started
I0911 18:32:03.313411       1 pv_controller.go:1108] reclaimVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: policy is Delete
I0911 18:32:03.314052       1 pv_controller.go:1752] scheduleOperation[delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]]
I0911 18:32:03.314096       1 pv_controller.go:1763] operation "delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]" is already running, skipping
I0911 18:32:03.316414       1 pv_controller.go:1340] isVolumeReleased[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is released
I0911 18:32:03.316431       1 pv_controller.go:1404] doDeleteVolume [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]
I0911 18:32:03.339613       1 pv_controller.go:1259] deletion of volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:32:03.339863       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: set phase Failed
I0911 18:32:03.339946       1 pv_controller.go:858] updating PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: set phase Failed
I0911 18:32:03.344578       1 pv_protection_controller.go:205] Got event on PV pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d
I0911 18:32:03.344615       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" with version 3819
I0911 18:32:03.344677       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: phase: Failed, bound to: "azuredisk-5356/pvc-mhpv8 (uid: 62b63e5d-c15b-4911-a40f-ea56baa68f0d)", boundByController: true
I0911 18:32:03.344702       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is bound to claim azuredisk-5356/pvc-mhpv8
I0911 18:32:03.344721       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: claim azuredisk-5356/pvc-mhpv8 not found
I0911 18:32:03.344729       1 pv_controller.go:1108] reclaimVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: policy is Delete
I0911 18:32:03.344741       1 pv_controller.go:1752] scheduleOperation[delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]]
I0911 18:32:03.344748       1 pv_controller.go:1763] operation "delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]" is already running, skipping
I0911 18:32:03.345669       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" with version 3819
I0911 18:32:03.345697       1 pv_controller.go:879] volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" entered phase "Failed"
I0911 18:32:03.345706       1 pv_controller.go:901] volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:32:03.345742       1 goroutinemap.go:150] Operation for "delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]" failed. No retries permitted until 2021-09-11 18:32:03.84572618 +0000 UTC m=+1806.582847941 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:32:03.346000       1 event.go:291] "Event occurred" object="pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:32:03.354747       1 actual_state_of_world.go:427] Set detach request time to current time for volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d on node "capz-4tyuov-md-0-pxbpw"
I0911 18:32:03.731429       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:32:03.731564       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d to the node "capz-4tyuov-md-0-pxbpw" mounted false
I0911 18:32:03.769218       1 node_status_updater.go:106] Updating status "{\"status\":{\"volumesAttached\":null}}" for node "capz-4tyuov-md-0-pxbpw" succeeded. VolumesAttached: []
I0911 18:32:03.769316       1 reconciler.go:219] attacherDetacher.DetachVolume started for volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d") on node "capz-4tyuov-md-0-pxbpw" 
... skipping 14 lines ...
I0911 18:32:11.484248       1 controller.go:720] It took 0.000190801 seconds to finish nodeSyncInternal
I0911 18:32:11.486201       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:32:11.549242       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:32:11.601036       1 resource_quota_controller.go:194] Resource quota controller queued all resource quota for full calculation of usage
I0911 18:32:11.818133       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:32:11.818335       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" with version 3819
I0911 18:32:11.818452       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: phase: Failed, bound to: "azuredisk-5356/pvc-mhpv8 (uid: 62b63e5d-c15b-4911-a40f-ea56baa68f0d)", boundByController: true
I0911 18:32:11.818504       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is bound to claim azuredisk-5356/pvc-mhpv8
I0911 18:32:11.818596       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: claim azuredisk-5356/pvc-mhpv8 not found
I0911 18:32:11.818607       1 pv_controller.go:1108] reclaimVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: policy is Delete
I0911 18:32:11.818625       1 pv_controller.go:1752] scheduleOperation[delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]]
I0911 18:32:11.818736       1 pv_controller.go:1231] deleteVolumeOperation [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d] started
I0911 18:32:11.835710       1 pv_controller.go:1340] isVolumeReleased[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is released
I0911 18:32:11.835730       1 pv_controller.go:1404] doDeleteVolume [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]
I0911 18:32:11.835765       1 pv_controller.go:1259] deletion of volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) since it's in attaching or detaching state
I0911 18:32:11.835781       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: set phase Failed
I0911 18:32:11.835792       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: phase Failed already set
E0911 18:32:11.835831       1 goroutinemap.go:150] Operation for "delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]" failed. No retries permitted until 2021-09-11 18:32:12.835805767 +0000 UTC m=+1815.572927428 (durationBeforeRetry 1s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) since it's in attaching or detaching state
I0911 18:32:11.957403       1 gc_controller.go:161] GC'ing orphaned
I0911 18:32:11.957433       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:32:14.811265       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="100.701µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54628" resp=200
I0911 18:32:19.275316       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) returned with <nil>
I0911 18:32:19.275374       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d) succeeded
I0911 18:32:19.275385       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d was detached from node:capz-4tyuov-md-0-pxbpw
I0911 18:32:19.275564       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d") on node "capz-4tyuov-md-0-pxbpw" 
I0911 18:32:21.860825       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:32:24.812680       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="87µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:54720" resp=200
I0911 18:32:26.486834       1 reflector.go:381] k8s.io/client-go/informers/factory.go:134: forcing resync
I0911 18:32:26.818579       1 pv_controller_base.go:528] resyncing PV controller
I0911 18:32:26.818704       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" with version 3819
I0911 18:32:26.818786       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: phase: Failed, bound to: "azuredisk-5356/pvc-mhpv8 (uid: 62b63e5d-c15b-4911-a40f-ea56baa68f0d)", boundByController: true
I0911 18:32:26.818869       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is bound to claim azuredisk-5356/pvc-mhpv8
I0911 18:32:26.818885       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: claim azuredisk-5356/pvc-mhpv8 not found
I0911 18:32:26.818924       1 pv_controller.go:1108] reclaimVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: policy is Delete
I0911 18:32:26.818943       1 pv_controller.go:1752] scheduleOperation[delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]]
I0911 18:32:26.818973       1 pv_controller.go:1231] deleteVolumeOperation [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d] started
I0911 18:32:26.830953       1 pv_controller.go:1340] isVolumeReleased[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is released
... skipping 4 lines ...
I0911 18:32:31.957987       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:32:31.986364       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d
I0911 18:32:31.986490       1 pv_controller.go:1435] volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" deleted
I0911 18:32:31.986550       1 pv_controller.go:1283] deleteVolumeOperation [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: success
I0911 18:32:31.998088       1 pv_protection_controller.go:205] Got event on PV pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d
I0911 18:32:31.998351       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d" with version 3862
I0911 18:32:31.998452       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: phase: Failed, bound to: "azuredisk-5356/pvc-mhpv8 (uid: 62b63e5d-c15b-4911-a40f-ea56baa68f0d)", boundByController: true
I0911 18:32:31.998490       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: volume is bound to claim azuredisk-5356/pvc-mhpv8
I0911 18:32:31.998514       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: claim azuredisk-5356/pvc-mhpv8 not found
I0911 18:32:31.998524       1 pv_controller.go:1108] reclaimVolume[pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d]: policy is Delete
I0911 18:32:31.998542       1 pv_controller.go:1752] scheduleOperation[delete-pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d[1f545ec5-30d2-4ddf-aee3-be44686e3fe3]]
I0911 18:32:31.998572       1 pv_controller.go:1231] deleteVolumeOperation [pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d] started
I0911 18:32:31.998745       1 pv_protection_controller.go:125] Processing PV pvc-62b63e5d-c15b-4911-a40f-ea56baa68f0d
... skipping 290 lines ...
I0911 18:32:53.424479       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3090
I0911 18:32:53.480049       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-3090, name kube-root-ca.crt, uid 1800c961-a520-4516-82ae-c7bf52ca4ed4, event type delete
I0911 18:32:53.485026       1 publisher.go:186] Finished syncing namespace "azuredisk-3090" (4.826333ms)
I0911 18:32:53.499101       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3090, name default-token-bd7dl, uid 925f22e5-a60b-4aa6-a8f0-c4b6b7ebc211, event type delete
I0911 18:32:53.512618       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3090, name default, uid cccc1c4c-5c80-4eaa-abf1-17b991df5158, event type delete
I0911 18:32:53.512710       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3090" (2.8µs)
E0911 18:32:53.518468       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3090/default: secrets "default-token-mkk7m" is forbidden: unable to create new content in namespace azuredisk-3090 because it is being terminated
I0911 18:32:53.518519       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3090/default), service account deleted, removing tokens
I0911 18:32:53.564644       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-3090, name pvc-25vgs.16a3d827ffc1ecad, uid 1b13c1fb-57ff-460f-ad11-1972b8e86b4c, event type delete
I0911 18:32:53.617892       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-3090, estimate: 0, errors: <nil>
I0911 18:32:53.618255       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-3090" (3µs)
I0911 18:32:53.627980       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-3090" (208.661005ms)
I0911 18:32:54.319714       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4078
... skipping 193 lines ...
I0911 18:32:54.741794       1 pv_controller.go:1038] volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" bound to claim "azuredisk-8510/pvc-2w9c7"
I0911 18:32:54.741804       1 pv_controller.go:1039] volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:32:54.741813       1 pv_controller.go:1040] claim "azuredisk-8510/pvc-2w9c7" status after binding: phase: Bound, bound to: "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822", bindCompleted: true, boundByController: true
I0911 18:32:54.811908       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="77.3µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55008" resp=200
I0911 18:32:55.179380       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3721
I0911 18:32:55.240748       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3721, name default-token-8sdlv, uid 6c615b3f-1d92-47cf-a675-d1b494140996, event type delete
E0911 18:32:55.263307       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3721/default: secrets "default-token-znwb5" is forbidden: unable to create new content in namespace azuredisk-3721 because it is being terminated
I0911 18:32:55.295933       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-8510/azuredisk-volume-tester-z7gh7"
I0911 18:32:55.296114       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-z7gh7"
I0911 18:32:55.296196       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-z7gh7, PodDisruptionBudget controller will avoid syncing.
I0911 18:32:55.296230       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-z7gh7"
I0911 18:32:55.311240       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-z7gh7"
I0911 18:32:55.311744       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-z7gh7, PodDisruptionBudget controller will avoid syncing.
... skipping 383 lines ...
I0911 18:33:31.172253       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:33:31.172300       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:33:31.172382       1 pv_controller.go:1763] operation "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" is already running, skipping
I0911 18:33:31.172471       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822] started
I0911 18:33:31.174194       1 pv_controller.go:1340] isVolumeReleased[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is released
I0911 18:33:31.174285       1 pv_controller.go:1404] doDeleteVolume [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]
I0911 18:33:31.233501       1 pv_controller.go:1259] deletion of volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:31.233520       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: set phase Failed
I0911 18:33:31.233529       1 pv_controller.go:858] updating PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: set phase Failed
I0911 18:33:31.237186       1 pv_protection_controller.go:205] Got event on PV pvc-1ed7a2dc-3589-494b-a905-fb03beba2822
I0911 18:33:31.237219       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:33:31.237242       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:33:31.237284       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:33:31.237317       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:33:31.237324       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:33:31.237334       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:33:31.237340       1 pv_controller.go:1763] operation "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" is already running, skipping
I0911 18:33:31.238190       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:33:31.238218       1 pv_controller.go:879] volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" entered phase "Failed"
I0911 18:33:31.238228       1 pv_controller.go:901] volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:33:31.238302       1 goroutinemap.go:150] Operation for "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" failed. No retries permitted until 2021-09-11 18:33:31.738265003 +0000 UTC m=+1894.475386664 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:31.238923       1 event.go:291] "Event occurred" object="pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
I0911 18:33:31.960497       1 gc_controller.go:161] GC'ing orphaned
I0911 18:33:31.960530       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:33:32.206082       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0911 18:33:33.646814       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1beta1.CSIStorageCapacity total 3 items received
I0911 18:33:34.367500       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
... skipping 57 lines ...
I0911 18:33:41.826935       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: all is bound
I0911 18:33:41.826942       1 pv_controller.go:858] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: set phase Bound
I0911 18:33:41.826957       1 pv_controller.go:861] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: phase Bound already set
I0911 18:33:41.826957       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-8510/pvc-xn48g]: already bound to "pvc-9d3e4f94-f45a-4a6d-b447-8521b4432a9e"
I0911 18:33:41.826966       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-8510/pvc-xn48g] status: set phase Bound
I0911 18:33:41.826968       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:33:41.826984       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:33:41.826990       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8510/pvc-xn48g] status: phase Bound already set
I0911 18:33:41.827000       1 pv_controller.go:1038] volume "pvc-9d3e4f94-f45a-4a6d-b447-8521b4432a9e" bound to claim "azuredisk-8510/pvc-xn48g"
I0911 18:33:41.827007       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:33:41.827016       1 pv_controller.go:1039] volume "pvc-9d3e4f94-f45a-4a6d-b447-8521b4432a9e" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-xn48g (uid: 9d3e4f94-f45a-4a6d-b447-8521b4432a9e)", boundByController: true
I0911 18:33:41.827024       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:33:41.827031       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
... skipping 15 lines ...
I0911 18:33:41.827277       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-8510/pvc-bgf65] status: phase Bound already set
I0911 18:33:41.827287       1 pv_controller.go:1038] volume "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99" bound to claim "azuredisk-8510/pvc-bgf65"
I0911 18:33:41.827323       1 pv_controller.go:1039] volume "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99" status after binding: phase: Bound, bound to: "azuredisk-8510/pvc-bgf65 (uid: 4ad799e6-16c8-4789-8ee3-25e9c4917c99)", boundByController: true
I0911 18:33:41.827336       1 pv_controller.go:1040] claim "azuredisk-8510/pvc-bgf65" status after binding: phase: Bound, bound to: "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99", bindCompleted: true, boundByController: true
I0911 18:33:41.831442       1 pv_controller.go:1340] isVolumeReleased[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is released
I0911 18:33:41.831572       1 pv_controller.go:1404] doDeleteVolume [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]
I0911 18:33:41.853620       1 pv_controller.go:1259] deletion of volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:41.853644       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: set phase Failed
I0911 18:33:41.853654       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase Failed already set
E0911 18:33:41.853723       1 goroutinemap.go:150] Operation for "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" failed. No retries permitted until 2021-09-11 18:33:42.85368587 +0000 UTC m=+1905.590807631 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:42.664299       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 43 items received
I0911 18:33:43.134156       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 0 items received
I0911 18:33:44.810616       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="64.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55500" resp=200
I0911 18:33:47.516669       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 39 items received
I0911 18:33:49.364967       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.RoleBinding total 0 items received
I0911 18:33:49.848970       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-9d3e4f94-f45a-4a6d-b447-8521b4432a9e) returned with <nil>
... skipping 58 lines ...
I0911 18:33:56.828525       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: volume is bound to claim azuredisk-8510/pvc-bgf65
I0911 18:33:56.828549       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: claim azuredisk-8510/pvc-bgf65 found: phase: Bound, bound to: "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99", bindCompleted: true, boundByController: true
I0911 18:33:56.828584       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: all is bound
I0911 18:33:56.828593       1 pv_controller.go:858] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: set phase Bound
I0911 18:33:56.828602       1 pv_controller.go:861] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: phase Bound already set
I0911 18:33:56.828616       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:33:56.828658       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:33:56.828700       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:33:56.828737       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:33:56.828751       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:33:56.828769       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:33:56.828813       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822] started
I0911 18:33:56.836140       1 pv_controller.go:1340] isVolumeReleased[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is released
I0911 18:33:56.836164       1 pv_controller.go:1404] doDeleteVolume [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]
I0911 18:33:56.875016       1 pv_controller.go:1259] deletion of volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:56.875036       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: set phase Failed
I0911 18:33:56.875047       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase Failed already set
E0911 18:33:56.875122       1 goroutinemap.go:150] Operation for "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" failed. No retries permitted until 2021-09-11 18:33:58.875095934 +0000 UTC m=+1921.612217695 (durationBeforeRetry 2s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:33:59.651238       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Ingress total 3 items received
I0911 18:34:00.185699       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99) returned with <nil>
I0911 18:34:00.185739       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99) succeeded
I0911 18:34:00.185750       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99 was detached from node:capz-4tyuov-md-0-pxbpw
I0911 18:34:00.185785       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99") on node "capz-4tyuov-md-0-pxbpw" 
I0911 18:34:00.217426       1 azure_controller_standard.go:143] azureDisk - detach disk: name "" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822"
... skipping 46 lines ...
I0911 18:34:11.829943       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: volume is bound to claim azuredisk-8510/pvc-bgf65
I0911 18:34:11.829993       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: claim azuredisk-8510/pvc-bgf65 found: phase: Bound, bound to: "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99", bindCompleted: true, boundByController: true
I0911 18:34:11.830028       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: all is bound
I0911 18:34:11.830058       1 pv_controller.go:858] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: set phase Bound
I0911 18:34:11.830106       1 pv_controller.go:861] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: phase Bound already set
I0911 18:34:11.830153       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:34:11.830215       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:34:11.830297       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:34:11.830317       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:34:11.830328       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:34:11.830442       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:34:11.830521       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822] started
I0911 18:34:11.848519       1 pv_controller.go:1340] isVolumeReleased[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is released
I0911 18:34:11.848544       1 pv_controller.go:1404] doDeleteVolume [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]
I0911 18:34:11.848573       1 pv_controller.go:1259] deletion of volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) since it's in attaching or detaching state
I0911 18:34:11.848581       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: set phase Failed
I0911 18:34:11.848587       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase Failed already set
E0911 18:34:11.848615       1 goroutinemap.go:150] Operation for "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" failed. No retries permitted until 2021-09-11 18:34:15.848594055 +0000 UTC m=+1938.585715716 (durationBeforeRetry 4s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) since it's in attaching or detaching state
I0911 18:34:11.960971       1 gc_controller.go:161] GC'ing orphaned
I0911 18:34:11.961006       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:34:14.811012       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="80µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:55794" resp=200
I0911 18:34:15.548367       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) returned with <nil>
I0911 18:34:15.548402       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822) succeeded
I0911 18:34:15.548620       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822 was detached from node:capz-4tyuov-md-0-pxbpw
... skipping 42 lines ...
I0911 18:34:26.831551       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: volume is bound to claim azuredisk-8510/pvc-bgf65
I0911 18:34:26.831573       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: claim azuredisk-8510/pvc-bgf65 found: phase: Bound, bound to: "pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99", bindCompleted: true, boundByController: true
I0911 18:34:26.831646       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: all is bound
I0911 18:34:26.831661       1 pv_controller.go:858] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: set phase Bound
I0911 18:34:26.831669       1 pv_controller.go:861] updating PersistentVolume[pvc-4ad799e6-16c8-4789-8ee3-25e9c4917c99]: phase Bound already set
I0911 18:34:26.831682       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4097
I0911 18:34:26.831700       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:34:26.831722       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:34:26.831738       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:34:26.831748       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:34:26.831850       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:34:26.831959       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-9d3e4f94-f45a-4a6d-b447-8521b4432a9e" with version 3994
I0911 18:34:26.831961       1 pv_controller.go:1231] deleteVolumeOperation [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822] started
... skipping 11 lines ...
I0911 18:34:32.030828       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822
I0911 18:34:32.030863       1 pv_controller.go:1435] volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" deleted
I0911 18:34:32.030878       1 pv_controller.go:1283] deleteVolumeOperation [pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: success
I0911 18:34:32.039137       1 pv_protection_controller.go:205] Got event on PV pvc-1ed7a2dc-3589-494b-a905-fb03beba2822
I0911 18:34:32.039164       1 pv_protection_controller.go:125] Processing PV pvc-1ed7a2dc-3589-494b-a905-fb03beba2822
I0911 18:34:32.039565       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-1ed7a2dc-3589-494b-a905-fb03beba2822" with version 4186
I0911 18:34:32.039600       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: phase: Failed, bound to: "azuredisk-8510/pvc-2w9c7 (uid: 1ed7a2dc-3589-494b-a905-fb03beba2822)", boundByController: true
I0911 18:34:32.039627       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: volume is bound to claim azuredisk-8510/pvc-2w9c7
I0911 18:34:32.039652       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: claim azuredisk-8510/pvc-2w9c7 not found
I0911 18:34:32.039661       1 pv_controller.go:1108] reclaimVolume[pvc-1ed7a2dc-3589-494b-a905-fb03beba2822]: policy is Delete
I0911 18:34:32.039678       1 pv_controller.go:1752] scheduleOperation[delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]]
I0911 18:34:32.039686       1 pv_controller.go:1763] operation "delete-pvc-1ed7a2dc-3589-494b-a905-fb03beba2822[d63ec547-b846-470b-a488-5b3d14d216c0]" is already running, skipping
I0911 18:34:32.044143       1 pv_protection_controller.go:183] Removed protection finalizer from PV pvc-1ed7a2dc-3589-494b-a905-fb03beba2822
... skipping 368 lines ...
I0911 18:35:01.577325       1 azure_controller_standard.go:93] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - attach disk(capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) with DiskEncryptionSetID()
I0911 18:35:02.365984       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-8510
I0911 18:35:02.428636       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-8510, name kube-root-ca.crt, uid 1254779a-6937-4c62-98d6-07ff4ba1d0b4, event type delete
I0911 18:35:02.433086       1 publisher.go:186] Finished syncing namespace "azuredisk-8510" (4.400327ms)
I0911 18:35:02.436288       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.ConfigMap total 20 items received
I0911 18:35:02.441898       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-8510, name default-token-w7pzv, uid 27feb2d6-e3a5-4231-a3a4-17a8b06d7a18, event type delete
E0911 18:35:02.459034       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-8510/default: secrets "default-token-d9hhf" is forbidden: unable to create new content in namespace azuredisk-8510 because it is being terminated
I0911 18:35:02.487817       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d82c6948a01a, uid 8eaa7146-4340-4c55-bf8a-5825de8fa899, event type delete
I0911 18:35:02.491192       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d82edb83a47e, uid f5a0ec4b-7e4e-4a16-98c7-8b0520305021, event type delete
I0911 18:35:02.499399       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d8314f0cee31, uid caeb0312-1d54-41b2-85c6-a183a9668573, event type delete
I0911 18:35:02.502791       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d833c03931bb, uid 6cda6981-ed47-415c-8888-461b921a9eb4, event type delete
I0911 18:35:02.506994       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d8345178c265, uid e2968585-6ac2-4e73-ada7-e5cee58b95ec, event type delete
I0911 18:35:02.510604       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-8510, name azuredisk-volume-tester-z7gh7.16a3d834591feacd, uid 77b9e4a4-bbcf-4239-b4fb-72b4f708e871, event type delete
... skipping 232 lines ...
I0911 18:35:37.241010       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:35:37.241071       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]]
I0911 18:35:37.241079       1 pv_controller.go:1763] operation "delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]" is already running, skipping
I0911 18:35:37.240853       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011] started
I0911 18:35:37.243345       1 pv_controller.go:1340] isVolumeReleased[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is released
I0911 18:35:37.243362       1 pv_controller.go:1404] doDeleteVolume [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]
I0911 18:35:37.266109       1 pv_controller.go:1259] deletion of volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:35:37.266166       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: set phase Failed
I0911 18:35:37.266175       1 pv_controller.go:858] updating PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: set phase Failed
I0911 18:35:37.270582       1 pv_protection_controller.go:205] Got event on PV pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011
I0911 18:35:37.270598       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4367
I0911 18:35:37.270633       1 pv_controller.go:879] volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" entered phase "Failed"
I0911 18:35:37.270643       1 pv_controller.go:901] volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" changed status to "Failed": disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
E0911 18:35:37.270676       1 goroutinemap.go:150] Operation for "delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]" failed. No retries permitted until 2021-09-11 18:35:37.770662862 +0000 UTC m=+2020.507784523 (durationBeforeRetry 500ms). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:35:37.270725       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4367
I0911 18:35:37.270754       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase: Failed, bound to: "azuredisk-5561/pvc-wz6tl (uid: 8a42b2ac-8cf5-43d3-bcca-27c5fbf70011)", boundByController: true
I0911 18:35:37.270793       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is bound to claim azuredisk-5561/pvc-wz6tl
I0911 18:35:37.270816       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: claim azuredisk-5561/pvc-wz6tl not found
I0911 18:35:37.270823       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:35:37.270835       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]]
I0911 18:35:37.270843       1 pv_controller.go:1765] operation "delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]" postponed due to exponential backoff
I0911 18:35:37.270885       1 event.go:291] "Event occurred" object="pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" kind="PersistentVolume" apiVersion="v1" type="Warning" reason="VolumeFailedDelete" message="disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted"
... skipping 6 lines ...
I0911 18:35:41.833200       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: volume is bound to claim azuredisk-5561/pvc-tkg9v
I0911 18:35:41.833222       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: claim azuredisk-5561/pvc-tkg9v found: phase: Bound, bound to: "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b", bindCompleted: true, boundByController: true
I0911 18:35:41.833259       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: all is bound
I0911 18:35:41.833272       1 pv_controller.go:858] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: set phase Bound
I0911 18:35:41.833283       1 pv_controller.go:861] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: phase Bound already set
I0911 18:35:41.833361       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4367
I0911 18:35:41.833405       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase: Failed, bound to: "azuredisk-5561/pvc-wz6tl (uid: 8a42b2ac-8cf5-43d3-bcca-27c5fbf70011)", boundByController: true
I0911 18:35:41.833431       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is bound to claim azuredisk-5561/pvc-wz6tl
I0911 18:35:41.833489       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: claim azuredisk-5561/pvc-wz6tl not found
I0911 18:35:41.833521       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:35:41.833538       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]]
I0911 18:35:41.833604       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011] started
I0911 18:35:41.833710       1 pv_controller_base.go:640] storeObjectUpdate updating claim "azuredisk-5561/pvc-tkg9v" with version 4265
... skipping 11 lines ...
I0911 18:35:41.835161       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5561/pvc-tkg9v] status: phase Bound already set
I0911 18:35:41.835259       1 pv_controller.go:1038] volume "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b" bound to claim "azuredisk-5561/pvc-tkg9v"
I0911 18:35:41.835364       1 pv_controller.go:1039] volume "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b" status after binding: phase: Bound, bound to: "azuredisk-5561/pvc-tkg9v (uid: 91b2ff27-c889-4f7e-b13c-bc942a9d435b)", boundByController: true
I0911 18:35:41.835475       1 pv_controller.go:1040] claim "azuredisk-5561/pvc-tkg9v" status after binding: phase: Bound, bound to: "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b", bindCompleted: true, boundByController: true
I0911 18:35:41.848493       1 pv_controller.go:1340] isVolumeReleased[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is released
I0911 18:35:41.848516       1 pv_controller.go:1404] doDeleteVolume [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]
I0911 18:35:41.884095       1 pv_controller.go:1259] deletion of volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" failed: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:35:41.884121       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: set phase Failed
I0911 18:35:41.884133       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase Failed already set
E0911 18:35:41.884194       1 goroutinemap.go:150] Operation for "delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]" failed. No retries permitted until 2021-09-11 18:35:42.884142463 +0000 UTC m=+2025.621264124 (durationBeforeRetry 1s). Error: disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) already attached to node(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-pxbpw), could not be deleted
I0911 18:35:43.524868       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.VolumeAttachment total 4 items received
I0911 18:35:44.581617       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 2 items received
I0911 18:35:44.811720       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="84.001µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:56660" resp=200
I0911 18:35:45.373318       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-pxbpw"
I0911 18:35:45.373585       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b to the node "capz-4tyuov-md-0-pxbpw" mounted false
I0911 18:35:45.373716       1 actual_state_of_world.go:393] SetVolumeMountedByNode volume kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 to the node "capz-4tyuov-md-0-pxbpw" mounted false
... skipping 46 lines ...
I0911 18:35:56.834689       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: volume is bound to claim azuredisk-5561/pvc-tkg9v
I0911 18:35:56.834705       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: claim azuredisk-5561/pvc-tkg9v found: phase: Bound, bound to: "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b", bindCompleted: true, boundByController: true
I0911 18:35:56.834721       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: all is bound
I0911 18:35:56.834729       1 pv_controller.go:858] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: set phase Bound
I0911 18:35:56.834738       1 pv_controller.go:861] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: phase Bound already set
I0911 18:35:56.834752       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4367
I0911 18:35:56.834775       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase: Failed, bound to: "azuredisk-5561/pvc-wz6tl (uid: 8a42b2ac-8cf5-43d3-bcca-27c5fbf70011)", boundByController: true
I0911 18:35:56.834816       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is bound to claim azuredisk-5561/pvc-wz6tl
I0911 18:35:56.834836       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: claim azuredisk-5561/pvc-wz6tl not found
I0911 18:35:56.834848       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:35:56.834886       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]]
I0911 18:35:56.834944       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011] started
I0911 18:35:56.841935       1 pv_controller.go:1340] isVolumeReleased[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is released
I0911 18:35:56.841958       1 pv_controller.go:1404] doDeleteVolume [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]
I0911 18:35:56.841989       1 pv_controller.go:1259] deletion of volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" failed: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) since it's in attaching or detaching state
I0911 18:35:56.841997       1 pv_controller.go:887] updating updateVolumePhaseWithEvent[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: set phase Failed
I0911 18:35:56.842004       1 pv_controller.go:890] updating updateVolumePhaseWithEvent[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase Failed already set
E0911 18:35:56.842026       1 goroutinemap.go:150] Operation for "delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]" failed. No retries permitted until 2021-09-11 18:35:58.842010063 +0000 UTC m=+2041.579131724 (durationBeforeRetry 2s). Error: failed to delete disk(/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) since it's in attaching or detaching state
I0911 18:35:58.623369       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.CSINode total 4 items received
I0911 18:35:58.966249       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-sgwmt ReadyCondition updated. Updating timestamp.
I0911 18:36:00.833674       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-pxbpw) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) returned with <nil>
I0911 18:36:00.833740       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011) succeeded
I0911 18:36:00.833752       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011 was detached from node:capz-4tyuov-md-0-pxbpw
I0911 18:36:00.833797       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011") on node "capz-4tyuov-md-0-pxbpw" 
... skipping 22 lines ...
I0911 18:36:11.835130       1 pv_controller.go:858] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: set phase Bound
I0911 18:36:11.835137       1 pv_controller.go:997] updating PersistentVolumeClaim[azuredisk-5561/pvc-tkg9v]: already bound to "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b"
I0911 18:36:11.835138       1 pv_controller.go:861] updating PersistentVolume[pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b]: phase Bound already set
I0911 18:36:11.835146       1 pv_controller.go:751] updating PersistentVolumeClaim[azuredisk-5561/pvc-tkg9v] status: set phase Bound
I0911 18:36:11.835155       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4367
I0911 18:36:11.835171       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-5561/pvc-tkg9v] status: phase Bound already set
I0911 18:36:11.835174       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase: Failed, bound to: "azuredisk-5561/pvc-wz6tl (uid: 8a42b2ac-8cf5-43d3-bcca-27c5fbf70011)", boundByController: true
I0911 18:36:11.835183       1 pv_controller.go:1038] volume "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b" bound to claim "azuredisk-5561/pvc-tkg9v"
I0911 18:36:11.835194       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is bound to claim azuredisk-5561/pvc-wz6tl
I0911 18:36:11.835200       1 pv_controller.go:1039] volume "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b" status after binding: phase: Bound, bound to: "azuredisk-5561/pvc-tkg9v (uid: 91b2ff27-c889-4f7e-b13c-bc942a9d435b)", boundByController: true
I0911 18:36:11.835211       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: claim azuredisk-5561/pvc-wz6tl not found
I0911 18:36:11.835218       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:36:11.835221       1 pv_controller.go:1040] claim "azuredisk-5561/pvc-tkg9v" status after binding: phase: Bound, bound to: "pvc-91b2ff27-c889-4f7e-b13c-bc942a9d435b", bindCompleted: true, boundByController: true
... skipping 12 lines ...
I0911 18:36:17.170573       1 azure_managedDiskController.go:249] azureDisk - deleted a managed disk: /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011
I0911 18:36:17.170742       1 pv_controller.go:1435] volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" deleted
I0911 18:36:17.170835       1 pv_controller.go:1283] deleteVolumeOperation [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: success
I0911 18:36:17.182990       1 pv_protection_controller.go:205] Got event on PV pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011
I0911 18:36:17.183020       1 pv_protection_controller.go:125] Processing PV pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011
I0911 18:36:17.183410       1 pv_controller_base.go:640] storeObjectUpdate updating volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" with version 4428
I0911 18:36:17.183485       1 pv_controller.go:543] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: phase: Failed, bound to: "azuredisk-5561/pvc-wz6tl (uid: 8a42b2ac-8cf5-43d3-bcca-27c5fbf70011)", boundByController: true
I0911 18:36:17.183535       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: volume is bound to claim azuredisk-5561/pvc-wz6tl
I0911 18:36:17.183578       1 pv_controller.go:612] synchronizing PersistentVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: claim azuredisk-5561/pvc-wz6tl not found
I0911 18:36:17.183605       1 pv_controller.go:1108] reclaimVolume[pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011]: policy is Delete
I0911 18:36:17.183639       1 pv_controller.go:1752] scheduleOperation[delete-pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011[e222708f-ac62-4629-bed7-b849f714b044]]
I0911 18:36:17.183682       1 pv_controller.go:1231] deleteVolumeOperation [pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011] started
I0911 18:36:17.194581       1 pv_controller.go:1243] Volume "pvc-8a42b2ac-8cf5-43d3-bcca-27c5fbf70011" is already being deleted
... skipping 166 lines ...
I0911 18:36:30.771166       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa StorageAccountType:StandardSSD_LRS Size:10
I0911 18:36:30.772010       1 azure_managedDiskController.go:86] azureDisk - creating new managed Name:capz-4tyuov-dynamic-pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e StorageAccountType:Premium_LRS Size:10
I0911 18:36:31.965640       1 gc_controller.go:161] GC'ing orphaned
I0911 18:36:31.965669       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:36:32.982762       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-5561
I0911 18:36:32.999635       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-5561, name default-token-bqmmf, uid c0157e87-ffc1-4efc-b855-f5835cd71c24, event type delete
E0911 18:36:33.014528       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-5561/default: secrets "default-token-tplmv" is forbidden: unable to create new content in namespace azuredisk-5561 because it is being terminated
I0911 18:36:33.039047       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d849c4e8a80e, uid 40554bc3-d550-4f59-9610-5b650e67a5ff, event type delete
I0911 18:36:33.045768       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d84c49577c60, uid ab1888c0-9584-4ff4-a4f5-aa4d0143c0a4, event type delete
I0911 18:36:33.049592       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d84de6502b80, uid 86abf208-e7a9-43bb-acb4-1e0385a6254e, event type delete
I0911 18:36:33.054857       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d84de650d9bb, uid 328f7f2a-a1dc-4271-b9db-3ddd2a7a554a, event type delete
I0911 18:36:33.059168       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d84ed4059bdb, uid b7b22c34-064e-46a9-b8c8-f11baa4d51c0, event type delete
I0911 18:36:33.062497       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-5561, name azuredisk-volume-tester-pcw6q.16a3d8519cee1e36, uid b80bebd8-ebb3-4a44-afc3-39ab010e9117, event type delete
... skipping 200 lines ...
I0911 18:36:33.908156       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" lun 0 to node "capz-4tyuov-md-0-sgwmt".
I0911 18:36:33.908290       1 azure_controller_standard.go:93] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - attach disk(capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa) with DiskEncryptionSetID()
I0911 18:36:33.928652       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-4376
I0911 18:36:33.981592       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-4376, name default-token-snzml, uid 7fa9a7fe-07ad-442f-98e2-106f96d0862f, event type delete
I0911 18:36:33.996756       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-4376, name default, uid ce2bc8f0-afdb-454c-b598-c7cfa3658f5e, event type delete
I0911 18:36:33.997444       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3µs)
E0911 18:36:34.002776       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-4376/default: secrets "default-token-6rsjb" is forbidden: unable to create new content in namespace azuredisk-4376 because it is being terminated
I0911 18:36:34.002975       1 tokens_controller.go:252] syncServiceAccount(azuredisk-4376/default), service account deleted, removing tokens
I0911 18:36:34.038759       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-4376, name kube-root-ca.crt, uid 3fbb8c2e-baec-4c8f-afbd-bc42667538fc, event type delete
I0911 18:36:34.040939       1 publisher.go:186] Finished syncing namespace "azuredisk-4376" (2.136214ms)
I0911 18:36:34.116372       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-4376" (3.3µs)
I0911 18:36:34.117329       1 namespaced_resources_deleter.go:554] namespace controller - deleteAllContent - namespace: azuredisk-4376, estimate: 0, errors: <nil>
I0911 18:36:34.128844       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4376" (202.483549ms)
... skipping 577 lines ...
I0911 18:38:11.843109       1 pv_controller.go:1039] volume "pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-lxnkg (uid: b88d7af7-32b2-47cc-8af2-787ff20a4b6e)", boundByController: true
I0911 18:38:11.843125       1 pv_controller.go:1040] claim "azuredisk-953/pvc-lxnkg" status after binding: phase: Bound, bound to: "pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e", bindCompleted: true, boundByController: true
I0911 18:38:11.969165       1 gc_controller.go:161] GC'ing orphaned
I0911 18:38:11.969192       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:38:14.812184       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="57.4µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:58130" resp=200
I0911 18:38:15.527320       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.HorizontalPodAutoscaler total 5 items received
E0911 18:38:15.749733       1 azure_controller_standard.go:102] azureDisk - attach disk(capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1) on rg(capz-4tyuov) vm(capz-4tyuov-md-0-sgwmt) failed, err: &{true -1 0001-01-01 00:00:00 +0000 UTC Code="StorageFailure" Message="Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'."}
I0911 18:38:15.749780       1 azure_controller_standard.go:111] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - attach disk(capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1) returned with &{true -1 0001-01-01 00:00:00 +0000 UTC Code="StorageFailure" Message="Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'."}
I0911 18:38:15.749947       1 attacher.go:91] Attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" to instance "capz-4tyuov-md-0-sgwmt" failed with Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="StorageFailure" Message="Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'."
E0911 18:38:15.750178       1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1 podName: nodeName:}" failed. No retries permitted until 2021-09-11 18:38:16.250148558 +0000 UTC m=+2178.987270319 (durationBeforeRetry 500ms). Error: AttachVolume.Attach failed for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1") from node "capz-4tyuov-md-0-sgwmt" : Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="StorageFailure" Message="Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'."
I0911 18:38:15.750350       1 event.go:291] "Event occurred" object="azuredisk-953/azuredisk-volume-tester-hr88k" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="AttachVolume.Attach failed for volume \"pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1\" : Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code=\"StorageFailure\" Message=\"Error while creating storage object https://md-hdd-sfgppb4l5pp5.z32.blob.storage.azure.net/b0xt5b52fn5l/abcd  Target: '/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1'.\""
I0911 18:38:16.274609       1 reconciler.go:295] attacherDetacher.AttachVolume started for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1") from node "capz-4tyuov-md-0-sgwmt" 
I0911 18:38:16.343659       1 azure_controller_common.go:298] azureDisk - find disk: lun 2 name "capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" uri "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1"
I0911 18:38:16.343695       1 attacher.go:82] Attach operation is successful. volume "capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" is already attached to node "capz-4tyuov-md-0-sgwmt" at lun 2.
I0911 18:38:16.343977       1 operation_generator.go:369] AttachVolume.Attach succeeded for volume "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1") from node "capz-4tyuov-md-0-sgwmt" 
I0911 18:38:16.344018       1 actual_state_of_world.go:350] Volume "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" is already added to attachedVolume list to node "capz-4tyuov-md-0-sgwmt", update device path "2"
I0911 18:38:16.344040       1 actual_state_of_world.go:507] Report volume "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1" as attached to node "capz-4tyuov-md-0-sgwmt"
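
[Editor note] The attach sequence above is the transient-failure path: the first Attach call returns a retriable StorageFailure, nestedpendingoperations blocks further retries for 500ms (the durationBeforeRetry in the log), and the next reconcile pass finds the disk already attached at lun 2 and reports success. Below is a minimal Go sketch of that retry-with-backoff pattern, purely illustrative: attachDisk, the backoff values, and the simulated failure are assumptions for the sketch, not the actual controller code.

    // retry_sketch.go - illustrative only; not the Kubernetes
    // nestedpendingoperations implementation.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // attachDisk is a hypothetical stand-in for the cloud attach call that
    // failed above with a retriable StorageFailure; it succeeds on the
    // third attempt to mimic the log.
    func attachDisk(attempt int) error {
        if attempt < 2 {
            return errors.New("StorageFailure: error while creating storage object")
        }
        return nil
    }

    func main() {
        backoff := 500 * time.Millisecond // durationBeforeRetry seen in the log
        for attempt := 0; ; attempt++ {
            err := attachDisk(attempt)
            if err == nil {
                fmt.Println("attach succeeded on attempt", attempt)
                return
            }
            fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, backoff)
            time.Sleep(backoff)
            backoff *= 2 // back off further between retries
        }
    }
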
... skipping 1269 lines ...
I0911 18:41:51.978590       1 gc_controller.go:161] GC'ing orphaned
I0911 18:41:51.978619       1 gc_controller.go:224] GC'ing unscheduled pods which are terminating.
I0911 18:41:52.256215       1 resource_quota_controller.go:424] no resource updates from discovery, skipping resource quota sync
I0911 18:41:52.448593       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:41:52.448684       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:41:52.448724       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-4tyuov-md-0-sgwmt"
I0911 18:41:52.448745       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Failed" for node "capz-4tyuov-md-0-sgwmt"
I0911 18:41:53.840807       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.IngressClass total 0 items received
I0911 18:41:54.088599       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 45.064363196s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:54.088744       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 45.064513997s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:54.088764       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 45.064535497s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:54.088986       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 45.064551797s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:54.089077       1 node_lifecycle_controller.go:882] Node capz-4tyuov-md-0-sgwmt is unresponsive as of 2021-09-11 18:41:54.089049738 +0000 UTC m=+2396.826171499. Adding it to the Taint queue.
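
[Editor note] The node_lifecycle_controller entries above show the unresponsive-node handling: once the kubelet stops posting status, each condition's last heartbeat is compared against the configured grace period (here ~45s have elapsed), the conditions are left at Unknown, and the node is queued for tainting. A rough Go sketch of that staleness check follows; the types and the hard-coded grace period are assumptions standing in for the real controller's configuration.

    // staleness_sketch.go - illustrative only; hypothetical types, not the
    // real node_lifecycle_controller.
    package main

    import (
        "fmt"
        "time"
    )

    type nodeCondition struct {
        Type          string
        LastHeartbeat time.Time
    }

    // isStale reports whether a condition's heartbeat is older than the
    // grace period, mirroring the "hasn't been updated for ..." log lines.
    func isStale(c nodeCondition, now time.Time, grace time.Duration) bool {
        return now.Sub(c.LastHeartbeat) > grace
    }

    func main() {
        now := time.Now()
        grace := 40 * time.Second // assumed grace period for the sketch
        conds := []nodeCondition{
            {Type: "Ready", LastHeartbeat: now.Add(-45 * time.Second)},
            {Type: "MemoryPressure", LastHeartbeat: now.Add(-45 * time.Second)},
        }
        unresponsive := false
        for _, c := range conds {
            if isStale(c, now, grace) {
                fmt.Printf("%s hasn't been updated for %v\n", c.Type, now.Sub(c.LastHeartbeat))
                unresponsive = true
            }
        }
        if unresponsive {
            fmt.Println("node is unresponsive; adding it to the taint queue")
        }
    }
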
... skipping 80 lines ...
I0911 18:41:56.857645       1 pv_controller.go:1038] volume "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" bound to claim "azuredisk-953/pvc-njvrf"
I0911 18:41:56.857661       1 pv_controller.go:1039] volume "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" status after binding: phase: Bound, bound to: "azuredisk-953/pvc-njvrf (uid: 3d5b4a5a-def7-488a-94d0-b764f36290fa)", boundByController: true
I0911 18:41:56.857676       1 pv_controller.go:1040] claim "azuredisk-953/pvc-njvrf" status after binding: phase: Bound, bound to: "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa", bindCompleted: true, boundByController: true
I0911 18:41:57.449798       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:41:57.449987       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:41:57.450018       1 azure_instances.go:239] InstanceShutdownByProviderID gets power status "running" for node "capz-4tyuov-md-0-sgwmt"
I0911 18:41:57.450032       1 azure_instances.go:250] InstanceShutdownByProviderID gets provisioning state "Failed" for node "capz-4tyuov-md-0-sgwmt"
I0911 18:41:59.090252       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 50.066016795s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:59.090304       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 50.066075296s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:59.090323       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 50.066094296s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:41:59.090336       1 node_lifecycle_controller.go:1092] node capz-4tyuov-md-0-sgwmt hasn't been updated for 50.066108196s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-09-11 18:41:07 +0000 UTC,LastTransitionTime:2021-09-11 18:41:49 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I0911 18:42:02.450549       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
I0911 18:42:02.464632       1 azure_vmss.go:367] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/virtualMachines/capz-4tyuov-md-0-sgwmt), assuming it is managed by availability set: not a vmss instance
... skipping 440 lines ...
I0911 18:42:56.861079       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: volume is bound to claim azuredisk-953/pvc-gmvdp
I0911 18:42:56.861100       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: claim azuredisk-953/pvc-gmvdp found: phase: Bound, bound to: "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1", bindCompleted: true, boundByController: true
I0911 18:42:56.861118       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: all is bound
I0911 18:42:56.861125       1 pv_controller.go:858] updating PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: set phase Bound
I0911 18:42:56.861134       1 pv_controller.go:861] updating PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: phase Bound already set
I0911 18:43:01.143563       1 reflector.go:535] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Watch close - *v1.PartialObjectMetadata total 5 items received
E0911 18:43:04.245259       1 azure_controller_standard.go:175] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa) on rg(capz-4tyuov) vm(capz-4tyuov-md-0-sgwmt) failed, err: &{true -1 0001-01-01 00:00:00 +0000 UTC Code="OperationPreempted" Message="Operation execution has been preempted by a more recent operation."}
I0911 18:43:04.245330       1 azure_controller_standard.go:184] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-sgwmt) - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa) returned with &{true -1 0001-01-01 00:00:00 +0000 UTC Code="OperationPreempted" Message="Operation execution has been preempted by a more recent operation."}
E0911 18:43:04.245373       1 azure_controller_common.go:262] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa) failed, err: Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="OperationPreempted" Message="Operation execution has been preempted by a more recent operation."
E0911 18:43:04.245402       1 attacher.go:279] failed to detach azure disk "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa", err Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="OperationPreempted" Message="Operation execution has been preempted by a more recent operation."
I0911 18:43:04.245417       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa was detached from node:capz-4tyuov-md-0-sgwmt
I0911 18:43:04.245439       1 actual_state_of_world.go:487] Volume "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" is no longer attached to node "capz-4tyuov-md-0-sgwmt"
E0911 18:43:04.245514       1 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa podName: nodeName:}" failed. No retries permitted until 2021-09-11 18:43:04.745497332 +0000 UTC m=+2467.482619093 (durationBeforeRetry 500ms). Error: DetachVolume.Detach failed for volume "pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-3d5b4a5a-def7-488a-94d0-b764f36290fa") on node "capz-4tyuov-md-0-sgwmt" : Retriable: true, RetryAfter: 0s, HTTPStatusCode: -1, RawError: Code="OperationPreempted" Message="Operation execution has been preempted by a more recent operation."
I0911 18:43:04.319624       1 azure_wrap.go:194] Virtual machine "capz-4tyuov-md-0-sgwmt" not found
W0911 18:43:04.319648       1 azure_controller_standard.go:124] azureDisk - cannot find node capz-4tyuov-md-0-sgwmt, skip detaching disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e)
I0911 18:43:04.319668       1 azure_controller_common.go:266] azureDisk - detach disk(, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e) succeeded
I0911 18:43:04.319677       1 attacher.go:282] azureDisk - disk:/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e was detached from node:capz-4tyuov-md-0-sgwmt
I0911 18:43:04.319697       1 operation_generator.go:484] DetachVolume.Detach succeeded for volume "pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e" (UniqueName: "kubernetes.io/azure-disk//subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-b88d7af7-32b2-47cc-8af2-787ff20a4b6e") on node "capz-4tyuov-md-0-sgwmt" 
I0911 18:43:04.379008       1 azure_wrap.go:194] Virtual machine "capz-4tyuov-md-0-sgwmt" not found
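
[Editor note] Two detach outcomes appear back to back above: one detach is preempted by a more recent operation and rescheduled after a 500ms backoff, while the next detach finds the VM already gone and is reported as succeeded. A small Go sketch of that "not found means already detached" decision; errVMNotFound and detachDisk are hypothetical stand-ins for the sketch, not the cloud provider's API.

    // detach_sketch.go - illustrative only.
    package main

    import (
        "errors"
        "fmt"
    )

    // errVMNotFound is a hypothetical sentinel standing in for the cloud
    // provider's "virtual machine not found" response.
    var errVMNotFound = errors.New("virtual machine not found")

    // detachDisk pretends the VM has already been deleted.
    func detachDisk(vm, disk string) error {
        return errVMNotFound
    }

    func main() {
        err := detachDisk("capz-4tyuov-md-0-sgwmt", "pvc-b88d7af7")
        if errors.Is(err, errVMNotFound) {
            // Mirrors the log: cannot find node, skip detaching the disk,
            // and report the detach as succeeded.
            fmt.Println("node not found; treating detach as succeeded")
            return
        }
        if err != nil {
            fmt.Println("detach failed, will retry:", err)
        }
    }
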
... skipping 905 lines ...
I0911 18:45:54.437868       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.StatefulSet total 4 items received
I0911 18:45:54.811460       1 httplog.go:104] "HTTP" verb="GET" URI="/healthz" latency="78.101µs" userAgent="kube-probe/1.22+" audit-ID="" srcIP="127.0.0.1:34332" resp=200
I0911 18:45:56.158926       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0475c0b03239a78, ext:2454789786613, loc:(*time.Location)(0x7504dc0)}}
I0911 18:45:56.159058       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0475c39097af109, ext:2638896173802, loc:(*time.Location)(0x7504dc0)}}
I0911 18:45:56.159080       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-4tyuov-md-0-t7w5j], creating 1
I0911 18:45:56.159541       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-t7w5j"
W0911 18:45:56.159563       1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-4tyuov-md-0-t7w5j" does not exist
I0911 18:45:56.159584       1 controller.go:682] Ignoring node capz-4tyuov-md-0-t7w5j with Ready condition status False
I0911 18:45:56.159602       1 controller.go:269] Triggering nodeSync
I0911 18:45:56.159611       1 controller.go:288] nodeSync has been triggered
I0911 18:45:56.159619       1 controller.go:765] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0911 18:45:56.159626       1 controller.go:779] Finished updateLoadBalancerHosts
I0911 18:45:56.159631       1 controller.go:720] It took 1.4301e-05 seconds to finish nodeSyncInternal
... skipping 673 lines ...
I0911 18:46:56.873209       1 pv_controller.go:578] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: volume is bound to claim azuredisk-953/pvc-gmvdp
I0911 18:46:56.873281       1 pv_controller.go:620] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: claim azuredisk-953/pvc-gmvdp found: phase: Bound, bound to: "pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1", bindCompleted: true, boundByController: true
I0911 18:46:56.873299       1 pv_controller.go:687] synchronizing PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: all is bound
I0911 18:46:56.873312       1 pv_controller.go:858] updating PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: set phase Bound
I0911 18:46:56.873321       1 pv_controller.go:861] updating PersistentVolume[pvc-0c475e9e-2673-46bf-8d58-7850a24b85c1]: phase Bound already set
I0911 18:46:57.385288       1 reflector.go:535] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Deployment total 4 items received
I0911 18:46:59.149870       1 node_lifecycle_controller.go:1039] ReadyCondition for Node capz-4tyuov-md-0-t7w5j transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2021-09-11 18:46:06 +0000 UTC,LastTransitionTime:2021-09-11 18:45:56 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2021-09-11 18:46:56 +0000 UTC,LastTransitionTime:2021-09-11 18:46:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0911 18:46:59.149960       1 node_lifecycle_controller.go:1047] Node capz-4tyuov-md-0-t7w5j ReadyCondition updated. Updating timestamp.
I0911 18:46:59.165720       1 node_lifecycle_controller.go:893] Node capz-4tyuov-md-0-t7w5j is healthy again, removing all taints
I0911 18:46:59.166341       1 taint_manager.go:435] "Noticed node update" node={nodeName:capz-4tyuov-md-0-t7w5j}
I0911 18:46:59.167564       1 taint_manager.go:440] "Updating known taints on node" node="capz-4tyuov-md-0-t7w5j" taints=[]
I0911 18:46:59.167591       1 taint_manager.go:461] "All taints were removed from the node. Cancelling all evictions..." node="capz-4tyuov-md-0-t7w5j"
I0911 18:46:59.166958       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-4tyuov-md-0-t7w5j"
... skipping 1834 lines ...
I0911 18:52:09.358437       1 pv_controller.go:809] updating PersistentVolumeClaim[azuredisk-552/pvc-sphp5] status: phase Bound already set
I0911 18:52:09.358595       1 pv_controller.go:1038] volume "pvc-e4bd86c6-e6da-45e1-8f07-edb52a9be67a" bound to claim "azuredisk-552/pvc-sphp5"
I0911 18:52:09.358735       1 pv_controller.go:1039] volume "pvc-e4bd86c6-e6da-45e1-8f07-edb52a9be67a" status after binding: phase: Bound, bound to: "azuredisk-552/pvc-sphp5 (uid: e4bd86c6-e6da-45e1-8f07-edb52a9be67a)", boundByController: true
I0911 18:52:09.358762       1 pv_controller.go:1040] claim "azuredisk-552/pvc-sphp5" status after binding: phase: Bound, bound to: "pvc-e4bd86c6-e6da-45e1-8f07-edb52a9be67a", bindCompleted: true, boundByController: true
I0911 18:52:09.927610       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-3033
I0911 18:52:09.985035       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-3033, name default-token-jx27n, uid 1685b765-2382-4489-899a-6f66c30f4c1d, event type delete
E0911 18:52:10.002587       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-3033/default: secrets "default-token-f5226" is forbidden: unable to create new content in namespace azuredisk-3033 because it is being terminated
I0911 18:52:10.006140       1 disruption.go:427] updatePod called on pod "azuredisk-volume-tester-zrk98"
I0911 18:52:10.006674       1 disruption.go:490] No PodDisruptionBudgets found for pod azuredisk-volume-tester-zrk98, PodDisruptionBudget controller will avoid syncing.
I0911 18:52:10.006847       1 disruption.go:430] No matching pdb for pod "azuredisk-volume-tester-zrk98"
I0911 18:52:10.007927       1 taint_manager.go:400] "Noticed pod update" pod="azuredisk-552/azuredisk-volume-tester-zrk98"
I0911 18:52:10.048272       1 tokens_controller.go:252] syncServiceAccount(azuredisk-3033/default), service account deleted, removing tokens
I0911 18:52:10.048327       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-3033, name default, uid a0031cad-fa42-4277-a0fa-4dcf40ab366e, event type delete
... skipping 492 lines ...
I0911 18:53:48.292774       1 azure_controller_common.go:199] Trying to attach volume "/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-619eadec-9383-4cd3-8ce0-c23dd2d9a045" lun 0 to node "capz-4tyuov-md-0-t7w5j".
I0911 18:53:48.292823       1 azure_controller_standard.go:93] azureDisk - update(capz-4tyuov): vm(capz-4tyuov-md-0-t7w5j) - attach disk(capz-4tyuov-dynamic-pvc-619eadec-9383-4cd3-8ce0-c23dd2d9a045, /subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-4tyuov/providers/Microsoft.Compute/disks/capz-4tyuov-dynamic-pvc-619eadec-9383-4cd3-8ce0-c23dd2d9a045) with DiskEncryptionSetID()
I0911 18:53:49.314019       1 namespaced_resources_deleter.go:500] namespace controller - deleteAllContent - namespace: azuredisk-552
I0911 18:53:49.346700       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=configmaps, namespace azuredisk-552, name kube-root-ca.crt, uid b763311d-f101-4145-8075-ba652ed88ccf, event type delete
I0911 18:53:49.350244       1 publisher.go:186] Finished syncing namespace "azuredisk-552" (3.227424ms)
I0911 18:53:49.371926       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=secrets, namespace azuredisk-552, name default-token-bqmrt, uid 34ae1db0-0cc6-4f42-ad52-ee11d2dfbec4, event type delete
E0911 18:53:49.385579       1 tokens_controller.go:262] error synchronizing serviceaccount azuredisk-552/default: secrets "default-token-fgk24" is forbidden: unable to create new content in namespace azuredisk-552 because it is being terminated
I0911 18:53:49.422808       1 tokens_controller.go:252] syncServiceAccount(azuredisk-552/default), service account deleted, removing tokens
I0911 18:53:49.422844       1 resource_quota_monitor.go:355] QuotaMonitor process object: /v1, Resource=serviceaccounts, namespace azuredisk-552, name default, uid d40b5578-e851-4e02-bf73-bd2cc471dea6, event type delete
I0911 18:53:49.422870       1 serviceaccounts_controller.go:188] Finished syncing namespace "azuredisk-552" (1.7µs)
I0911 18:53:49.436454       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name azuredisk-volume-tester-zrk98.16a3d939450899be, uid 869cf9b1-dead-40cf-adda-48a4cfb5382c, event type delete
I0911 18:53:49.440590       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name azuredisk-volume-tester-zrk98.16a3d93bc227a983, uid 0946c6a3-be5a-4a9f-8994-50b4ab9156bc, event type delete
I0911 18:53:49.444174       1 resource_quota_monitor.go:355] QuotaMonitor process object: events.k8s.io/v1, Resource=events, namespace azuredisk-552, name azuredisk-volume-tester-zrk98.16a3d93de7985481, uid 87dab5fe-c5b0-4e39-8089-36602f0a8f53, event type delete
... skipping 416 lines ...

JUnit report was created: /logs/artifacts/junit_01.xml


Summarizing 1 Failure:

[Fail] Dynamic Provisioning [single-az] [It] should create a pod with multiple volumes [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows] 
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/testsuites/testsuites.go:730

Ran 12 of 53 Specs in 2982.396 seconds
FAIL! -- 11 Passed | 1 Failed | 0 Pending | 41 Skipped
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes.
To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711

... skipping 2 lines ...
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.4

--- FAIL: TestE2E (2982.41s)
FAIL
FAIL	sigs.k8s.io/azuredisk-csi-driver/test/e2e	2982.452s
FAIL
make: *** [Makefile:254: e2e-test] Error 1
================ DUMPING LOGS FOR MANAGEMENT CLUSTER ================
Exported logs for cluster "capz" to:
/logs/artifacts/management-cluster
================ DUMPING LOGS FOR WORKLOAD CLUSTER ================
Deploying log-dump-daemonset
daemonset.apps/log-dump-node created
... skipping 24 lines ...