Result: success
Tests: 0 failed / 12 succeeded
Started: 2022-09-04 23:43
Elapsed: 51m55s
Revision
Uploader: crier

No Test Failures!


12 passed tests

47 skipped tests

Error lines from build-log.txt

... skipping 709 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 134 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-53p2mm-kubeconfig; do sleep 1; done"
capz-53p2mm-kubeconfig                 cluster.x-k8s.io/secret   1      1s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-53p2mm-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-53p2mm-control-plane-n5vrz   NotReady   control-plane   6s    v1.26.0-alpha.0.378+bcea98234f0fdc
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
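The kubeconfig step above pipes `kubectl get secrets … -o json` through `jq -r .data.value` and `base64 --decode`. The same extraction can be exercised without a cluster by feeding the pipeline a synthetic Secret object; the JSON shape below mirrors what the API server returns for a Secret, and the kubeconfig content is a stand-in:

```shell
# Build a fake Secret JSON whose .data.value holds a base64-encoded
# kubeconfig, then extract and decode it with the same pipeline the log uses.
fake_kubeconfig="$(printf 'apiVersion: v1\nkind: Config\n')"
secret_json="{\"data\":{\"value\":\"$(printf '%s' "$fake_kubeconfig" | base64 -w0)\"}}"

# Same pipeline as in the log: select .data.value, then base64-decode it.
printf '%s' "$secret_json" | jq -r .data.value | base64 --decode
```

The real command only differs in where the JSON comes from (`kubectl get secrets capz-53p2mm-kubeconfig -o json` against the management cluster).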
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-53p2mm-control-plane-n5vrz condition met
node/capz-53p2mm-mp-0000000 condition met
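Several steps in this log share one pattern: poll for a condition inside `timeout --foreground … bash -c "while ! …; do sleep 1; done"` so the wait has an overall deadline. A minimal self-contained sketch of that pattern, with a temp file standing in for the Kubernetes resource being waited on:

```shell
# A background job "creates the resource" after a short delay; the
# foreground loop polls for it under an overall deadline, mirroring the
# timeout/while-sleep loops in the log above.
marker="$(mktemp -u)"                # path only; the file does not exist yet
( sleep 1; : > "$marker" ) &         # the resource appears after ~1s
timeout --foreground 10 bash -c \
  "until [ -e '$marker' ]; do sleep 0.2; done"
echo "condition met"                 # analogous to kubectl's "condition met"
wait                                 # reap the background job
rm -f "$marker"
```

If the condition never becomes true, `timeout` kills the polling shell and exits non-zero, which is what fails the make target.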
... skipping 62 lines ...
Dynamic Provisioning [single-az] 
  should create a volume on demand with mount options [kubernetes.io/azure-disk] [disk.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/dynamic_provisioning_test.go:101
STEP: Creating a kubernetes client
Sep  5 00:01:59.932: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
Sep  5 00:02:00.617: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/05 00:02:01 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/05 00:02:01 Check successfully
Sep  5 00:02:01.540: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 00:02:01.855: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-pbkck" in namespace "azuredisk-8081" to be "Succeeded or Failed"
Sep  5 00:02:01.957: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 101.497922ms
Sep  5 00:02:04.060: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204766155s
Sep  5 00:02:06.164: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308409954s
Sep  5 00:02:08.266: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41112031s
Sep  5 00:02:10.369: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514192956s
Sep  5 00:02:12.472: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617099535s
... skipping 4 lines ...
Sep  5 00:02:22.987: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 21.131308638s
Sep  5 00:02:25.091: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 23.235530656s
Sep  5 00:02:27.201: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 25.345308856s
Sep  5 00:02:29.310: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Pending", Reason="", readiness=false. Elapsed: 27.455158568s
Sep  5 00:02:31.420: INFO: Pod "azuredisk-volume-tester-pbkck": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.56514167s
STEP: Saw pod success
Sep  5 00:02:31.420: INFO: Pod "azuredisk-volume-tester-pbkck" satisfied condition "Succeeded or Failed"
Sep  5 00:02:31.421: INFO: deleting Pod "azuredisk-8081"/"azuredisk-volume-tester-pbkck"
Sep  5 00:02:31.537: INFO: Pod azuredisk-volume-tester-pbkck has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-pbkck in namespace azuredisk-8081
STEP: validating provisioned PV
STEP: checking the PV
... skipping 97 lines ...
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod has 'FailedMount' event
Sep  5 00:03:31.104: INFO: deleting Pod "azuredisk-5466"/"azuredisk-volume-tester-6vbx9"
Sep  5 00:03:31.208: INFO: Error getting logs for pod azuredisk-volume-tester-6vbx9: the server rejected our request for an unknown reason (get pods azuredisk-volume-tester-6vbx9)
STEP: Deleting pod azuredisk-volume-tester-6vbx9 in namespace azuredisk-5466
STEP: validating provisioned PV
STEP: checking the PV
Sep  5 00:03:31.517: INFO: deleting PVC "azuredisk-5466"/"pvc-sthml"
Sep  5 00:03:31.517: INFO: Deleting PersistentVolumeClaim "pvc-sthml"
STEP: waiting for claim's PV "pvc-bf254743-7f24-4723-bfba-c14b275e9e62" to be deleted
... skipping 57 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 00:05:56.706: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-4zdwp" in namespace "azuredisk-2790" to be "Succeeded or Failed"
Sep  5 00:05:56.811: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 105.04331ms
Sep  5 00:05:58.914: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207487631s
Sep  5 00:06:01.017: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.31056402s
Sep  5 00:06:03.120: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.414121117s
Sep  5 00:06:05.224: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.517730523s
Sep  5 00:06:07.327: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 10.621071034s
... skipping 4 lines ...
Sep  5 00:06:17.842: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 21.135852545s
Sep  5 00:06:19.946: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 23.2395409s
Sep  5 00:06:22.055: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 25.348383707s
Sep  5 00:06:24.162: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Pending", Reason="", readiness=false. Elapsed: 27.455657448s
Sep  5 00:06:26.269: INFO: Pod "azuredisk-volume-tester-4zdwp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 29.563187818s
STEP: Saw pod success
Sep  5 00:06:26.269: INFO: Pod "azuredisk-volume-tester-4zdwp" satisfied condition "Succeeded or Failed"
Sep  5 00:06:26.269: INFO: deleting Pod "azuredisk-2790"/"azuredisk-volume-tester-4zdwp"
Sep  5 00:06:26.396: INFO: Pod azuredisk-volume-tester-4zdwp has the following logs: e2e-test

STEP: Deleting pod azuredisk-volume-tester-4zdwp in namespace azuredisk-2790
STEP: validating provisioned PV
STEP: checking the PV
... skipping 39 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with an error
Sep  5 00:07:04.585: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-k6mr4" in namespace "azuredisk-5356" to be "Error status code"
Sep  5 00:07:04.688: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.537066ms
Sep  5 00:07:06.791: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205568228s
Sep  5 00:07:08.894: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30843251s
Sep  5 00:07:10.996: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410961441s
Sep  5 00:07:13.098: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.513236345s
Sep  5 00:07:15.202: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.617302247s
Sep  5 00:07:17.305: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.719703608s
Sep  5 00:07:19.408: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 14.822392953s
Sep  5 00:07:21.511: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Pending", Reason="", readiness=false. Elapsed: 16.926127685s
Sep  5 00:07:23.619: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Running", Reason="", readiness=true. Elapsed: 19.033659706s
Sep  5 00:07:25.727: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Running", Reason="", readiness=false. Elapsed: 21.141736324s
Sep  5 00:07:27.835: INFO: Pod "azuredisk-volume-tester-k6mr4": Phase="Failed", Reason="", readiness=false. Elapsed: 23.249582573s
STEP: Saw pod failure
Sep  5 00:07:27.835: INFO: Pod "azuredisk-volume-tester-k6mr4" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  5 00:07:27.940: INFO: deleting Pod "azuredisk-5356"/"azuredisk-volume-tester-k6mr4"
Sep  5 00:07:28.050: INFO: Pod azuredisk-volume-tester-k6mr4 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azuredisk-volume-tester-k6mr4 in namespace azuredisk-5356
STEP: validating provisioned PV
... skipping 381 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 00:15:39.243: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vgvtk" in namespace "azuredisk-59" to be "Succeeded or Failed"
Sep  5 00:15:39.348: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 105.122054ms
Sep  5 00:15:41.451: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20803492s
Sep  5 00:15:43.559: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316226857s
Sep  5 00:15:45.666: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423750341s
Sep  5 00:15:47.775: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 8.532341959s
Sep  5 00:15:49.883: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 10.640010496s
... skipping 8 lines ...
Sep  5 00:16:08.856: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 29.613383934s
Sep  5 00:16:10.963: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 31.720636179s
Sep  5 00:16:13.071: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 33.828094759s
Sep  5 00:16:15.181: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Pending", Reason="", readiness=false. Elapsed: 35.937933098s
Sep  5 00:16:17.289: INFO: Pod "azuredisk-volume-tester-vgvtk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 38.046130998s
STEP: Saw pod success
Sep  5 00:16:17.289: INFO: Pod "azuredisk-volume-tester-vgvtk" satisfied condition "Succeeded or Failed"
Sep  5 00:16:17.289: INFO: deleting Pod "azuredisk-59"/"azuredisk-volume-tester-vgvtk"
Sep  5 00:16:17.402: INFO: Pod azuredisk-volume-tester-vgvtk has the following logs: hello world
hello world
hello world

STEP: Deleting pod azuredisk-volume-tester-vgvtk in namespace azuredisk-59
... skipping 68 lines ...
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: setting up the pod
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 00:17:07.247: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-vllhg" in namespace "azuredisk-2546" to be "Succeeded or Failed"
Sep  5 00:17:07.349: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 101.828406ms
Sep  5 00:17:09.452: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.204841462s
Sep  5 00:17:11.555: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.30717553s
Sep  5 00:17:13.658: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.410196358s
Sep  5 00:17:15.760: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51238228s
Sep  5 00:17:17.862: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 10.614629094s
... skipping 5 lines ...
Sep  5 00:17:30.482: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 23.23454145s
Sep  5 00:17:32.585: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Pending", Reason="", readiness=false. Elapsed: 25.337815695s
Sep  5 00:17:34.692: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Running", Reason="", readiness=true. Elapsed: 27.44452406s
Sep  5 00:17:36.799: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Running", Reason="", readiness=false. Elapsed: 29.552024389s
Sep  5 00:17:38.908: INFO: Pod "azuredisk-volume-tester-vllhg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.660223307s
STEP: Saw pod success
Sep  5 00:17:38.908: INFO: Pod "azuredisk-volume-tester-vllhg" satisfied condition "Succeeded or Failed"
Sep  5 00:17:38.908: INFO: deleting Pod "azuredisk-2546"/"azuredisk-volume-tester-vllhg"
Sep  5 00:17:39.028: INFO: Pod azuredisk-volume-tester-vllhg has the following logs: 100+0 records in
100+0 records out
104857600 bytes (100.0MB) copied, 0.068763 seconds, 1.4GB/s
hello world
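The `100+0 records in / 100+0 records out` lines above are `dd` output, and 104857600 bytes corresponds to `bs=1M count=100` (100 MiB). The exact command baked into the volume-tester image is not shown in the log, but a command of this shape reproduces the numbers against a temp file (the file path and flags here are illustrative):

```shell
# Write 100 MiB of zeros, as the volume tester presumably does against
# the mounted PV before printing "hello world".
out_file="$(mktemp)"
dd if=/dev/zero of="$out_file" bs=1M count=100
# 100 * 1 MiB = 104857600 bytes:
stat -c %s "$out_file"
rm -f "$out_file"
```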

... skipping 116 lines ...
STEP: creating a PVC
STEP: setting up the StorageClass
STEP: creating a StorageClass 
STEP: setting up the PVC and PV
STEP: creating a PVC
STEP: deploying the pod
STEP: checking that the pod's command exits with no error
Sep  5 00:18:26.203: INFO: Waiting up to 15m0s for pod "azuredisk-volume-tester-v49kf" in namespace "azuredisk-8582" to be "Succeeded or Failed"
Sep  5 00:18:26.305: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 101.918833ms
Sep  5 00:18:28.409: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205303986s
Sep  5 00:18:30.516: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312442939s
Sep  5 00:18:32.624: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420298848s
Sep  5 00:18:34.732: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.528979557s
Sep  5 00:18:36.840: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.636936673s
... skipping 5 lines ...
Sep  5 00:18:49.491: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 23.287346424s
Sep  5 00:18:51.599: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.395122807s
Sep  5 00:18:53.705: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 27.501842098s
Sep  5 00:18:55.814: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.610776793s
Sep  5 00:18:57.922: INFO: Pod "azuredisk-volume-tester-v49kf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 31.718980344s
STEP: Saw pod success
Sep  5 00:18:57.923: INFO: Pod "azuredisk-volume-tester-v49kf" satisfied condition "Succeeded or Failed"
Sep  5 00:18:57.923: INFO: deleting Pod "azuredisk-8582"/"azuredisk-volume-tester-v49kf"
Sep  5 00:18:58.035: INFO: Pod azuredisk-volume-tester-v49kf has the following logs: hello world

STEP: Deleting pod azuredisk-volume-tester-v49kf in namespace azuredisk-8582
STEP: validating provisioned PV
STEP: checking the PV
... skipping 424 lines ...

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
Pre-Provisioned [single-az] 
  should fail when maxShares is invalid [disk.csi.azure.com][windows]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163
STEP: Creating a kubernetes client
Sep  5 00:23:24.132: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azuredisk
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
... skipping 3 lines ...

S [SKIPPING] [0.977 seconds]
Pre-Provisioned
/home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:37
  [single-az]
  /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:69
    should fail when maxShares is invalid [disk.csi.azure.com][windows] [It]
    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/pre_provisioning_test.go:163

    test case is only available for CSI drivers

    /home/prow/go/src/sigs.k8s.io/azuredisk-csi-driver/test/e2e/suite_test.go:304
------------------------------
... skipping 242 lines ...
I0904 23:56:12.120083       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0904 23:56:12.122766       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662335771\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662335770\" (2022-09-04 22:56:10 +0000 UTC to 2023-09-04 22:56:10 +0000 UTC (now=2022-09-04 23:56:12.122740678 +0000 UTC))"
I0904 23:56:12.123124       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662335772\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662335771\" (2022-09-04 22:56:11 +0000 UTC to 2023-09-04 22:56:11 +0000 UTC (now=2022-09-04 23:56:12.123097884 +0000 UTC))"
I0904 23:56:12.123298       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0904 23:56:12.123376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0904 23:56:12.123848       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0904 23:56:14.948389       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 23:56:14.948588       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0904 23:56:17.124399       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0904 23:56:17.124762       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-53p2mm-control-plane-n5vrz_d731fd05-353c-465c-8cc5-da6781e9b2a3 became leader"
W0904 23:56:17.140277       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0904 23:56:17.140928       1 azure_auth.go:232] Using AzurePublicCloud environment
I0904 23:56:17.140969       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0904 23:56:17.141019       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0904 23:56:17.142268       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:56:17.142120       1 reflector.go:221] Starting reflector *v1.Node (12h33m47.25633021s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:56:17.142724       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:56:17.142187       1 reflector.go:221] Starting reflector *v1.ServiceAccount (12h33m47.25633021s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:56:17.143647       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:56:17.142203       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0904 23:56:17.173867       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 23:56:17.173894       1 controllermanager.go:573] Starting "pv-protection"
I0904 23:56:17.179512       1 controllermanager.go:602] Started "pv-protection"
I0904 23:56:17.179532       1 controllermanager.go:573] Starting "root-ca-cert-publisher"
I0904 23:56:17.179631       1 pv_protection_controller.go:79] Starting PV protection controller
I0904 23:56:17.179647       1 shared_informer.go:255] Waiting for caches to sync for PV protection
I0904 23:56:17.185039       1 controllermanager.go:602] Started "root-ca-cert-publisher"
... skipping 173 lines ...
I0904 23:56:20.429118       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 23:56:20.429131       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 23:56:20.429149       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 23:56:20.429171       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 23:56:20.429188       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0904 23:56:20.429202       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0904 23:56:20.429247       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 23:56:20.429265       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 23:56:20.429315       1 controllermanager.go:602] Started "persistentvolume-binder"
I0904 23:56:20.429355       1 controllermanager.go:573] Starting "clusterrole-aggregation"
I0904 23:56:20.429506       1 pv_controller_base.go:318] Starting persistent volume controller
I0904 23:56:20.429521       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0904 23:56:20.578675       1 controllermanager.go:602] Started "clusterrole-aggregation"
I0904 23:56:20.578702       1 controllermanager.go:573] Starting "daemonset"
I0904 23:56:20.578738       1 clusterroleaggregation_controller.go:194] Starting ClusterRoleAggregator
I0904 23:56:20.578746       1 shared_informer.go:255] Waiting for caches to sync for ClusterRoleAggregator
I0904 23:56:20.728606       1 controllermanager.go:602] Started "daemonset"
I0904 23:56:20.728632       1 controllermanager.go:573] Starting "horizontalpodautoscaling"
I0904 23:56:20.728893       1 daemon_controller.go:291] Starting daemon sets controller
I0904 23:56:20.728907       1 shared_informer.go:255] Waiting for caches to sync for daemon sets
I0904 23:56:20.879518       1 topologycache.go:183] Ignoring node capz-53p2mm-control-plane-n5vrz because it is not ready: [{MemoryPressure False 2022-09-04 23:56:00 +0000 UTC 2022-09-04 23:56:00 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:56:00 +0000 UTC 2022-09-04 23:56:00 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:56:00 +0000 UTC 2022-09-04 23:56:00 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:56:00 +0000 UTC 2022-09-04 23:56:00 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0904 23:56:20.879632       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0904 23:56:20.976060       1 request.go:614] Waited for 86.046326ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller
I0904 23:56:21.026001       1 request.go:614] Waited for 93.310697ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler
I0904 23:56:21.076242       1 request.go:614] Waited for 98.08581ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller/token
I0904 23:56:21.086356       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-53p2mm-control-plane-n5vrz" new_ttl="0s"
I0904 23:56:21.125375       1 request.go:614] Waited for 96.756978ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/horizontal-pod-autoscaler
... skipping 6 lines ...
I0904 23:56:21.278843       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0904 23:56:21.278854       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 23:56:21.278865       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 23:56:21.278884       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 23:56:21.278896       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0904 23:56:21.278915       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0904 23:56:21.278949       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 23:56:21.278966       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 23:56:21.279048       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-control-plane-n5vrz"
W0904 23:56:21.279071       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-53p2mm-control-plane-n5vrz" does not exist
I0904 23:56:21.279102       1 controllermanager.go:602] Started "attachdetach"
I0904 23:56:21.279118       1 controllermanager.go:573] Starting "persistentvolume-expander"
I0904 23:56:21.279202       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 23:56:21.279218       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0904 23:56:21.428495       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 23:56:21.428701       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
... skipping 342 lines ...
I0904 23:56:22.388523       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0904 23:56:22.388715       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/coredns-84994b8c4"
I0904 23:56:22.389304       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0904 23:56:22.403531       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 23:56:22.403754       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 23:56:22.388900504 +0000 UTC m=+11.908473807 - now: 2022-09-04 23:56:22.403746753 +0000 UTC m=+11.923320156]
I0904 23:56:22.409554       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="395.194905ms"
I0904 23:56:22.409583       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:56:22.409628       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 23:56:22.409612691 +0000 UTC m=+11.929185994"
I0904 23:56:22.410421       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 23:56:22 +0000 UTC - now: 2022-09-04 23:56:22.41040121 +0000 UTC m=+11.929974613]
I0904 23:56:22.425560       1 request.go:614] Waited for 143.527879ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/endpointslice-controller/token
I0904 23:56:22.430293       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="20.665987ms"
I0904 23:56:22.430350       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 23:56:22.430379       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 23:56:22.430311979 +0000 UTC m=+11.949885282"
... skipping 363 lines ...
I0904 23:56:54.490964       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bd6cb19c945069, ext:43999055364, loc:(*time.Location)(0x6f10040)}}
I0904 23:56:54.491021       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/calico-kube-controllers-755ff8d7b5-s9qjq"
I0904 23:56:54.492186       1 controller_utils.go:581] Controller calico-kube-controllers-755ff8d7b5 created pod calico-kube-controllers-755ff8d7b5-s9qjq
I0904 23:56:54.492490       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 23:56:54.492825       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-755ff8d7b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-755ff8d7b5-s9qjq"
I0904 23:56:54.492422       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="16.95263ms"
I0904 23:56:54.492969       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:56:54.493021       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 23:56:54.492986922 +0000 UTC m=+44.012560225"
I0904 23:56:54.493397       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 23:56:54 +0000 UTC - now: 2022-09-04 23:56:54.493390942 +0000 UTC m=+44.012964345]
I0904 23:56:54.499332       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-755ff8d7b5-s9qjq" podUID=3f55a9a9-2cb7-4811-9219-d44552e7ebea
I0904 23:56:54.499472       1 disruption.go:494] updatePod called on pod "calico-kube-controllers-755ff8d7b5-s9qjq"
I0904 23:56:54.499640       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-kube-controllers-755ff8d7b5-s9qjq, PodDisruptionBudget controller will avoid syncing.
I0904 23:56:54.499778       1 disruption.go:497] No matching pdb for pod "calico-kube-controllers-755ff8d7b5-s9qjq"
... skipping 223 lines ...
I0904 23:57:14.382913       1 disruption.go:494] updatePod called on pod "coredns-84994b8c4-6rcsl"
I0904 23:57:14.382962       1 disruption.go:570] No PodDisruptionBudgets found for pod coredns-84994b8c4-6rcsl, PodDisruptionBudget controller will avoid syncing.
I0904 23:57:14.382970       1 disruption.go:497] No matching pdb for pod "coredns-84994b8c4-6rcsl"
I0904 23:57:14.382986       1 replica_set.go:457] Pod coredns-84994b8c4-6rcsl updated, objectMeta {Name:coredns-84994b8c4-6rcsl GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:a9cf584d-db07-4e0c-9661-5ccf4935ef66 ResourceVersion:542 Generation:0 CreationTimestamp:2022-09-04 23:56:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:13bfb0fa-7cf5-41a4-b19b-e59204e87b66 Controller:0xc000aa55c0 BlockOwnerDeletion:0xc000aa55c1}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:56:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13bfb0fa-7cf5-41a4-b19b-e59204e87b66\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 23:56:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]} -> {Name:coredns-84994b8c4-6rcsl GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:a9cf584d-db07-4e0c-9661-5ccf4935ef66 ResourceVersion:549 Generation:0 CreationTimestamp:2022-09-04 23:56:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:13bfb0fa-7cf5-41a4-b19b-e59204e87b66 Controller:0xc000cf5dc0 BlockOwnerDeletion:0xc000cf5dc1}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:56:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13bfb0fa-7cf5-41a4-b19b-e59204e87b66\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 23:56:22 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 23:57:14 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0904 23:57:14.383290       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-84994b8c4", timestamp:time.Time{wall:0xc0bd6ca997277ddd, ext:11908037496, loc:(*time.Location)(0x6f10040)}}
I0904 23:57:14.383359       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-84994b8c4" (75.2µs)
I0904 23:57:16.740042       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-53p2mm-control-plane-n5vrz transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 23:56:33 +0000 UTC,LastTransitionTime:2022-09-04 23:56:00 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 23:57:14 +0000 UTC,LastTransitionTime:2022-09-04 23:57:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 23:57:16.740153       1 node_lifecycle_controller.go:1092] Node capz-53p2mm-control-plane-n5vrz ReadyCondition updated. Updating timestamp.
I0904 23:57:16.740180       1 node_lifecycle_controller.go:938] Node capz-53p2mm-control-plane-n5vrz is healthy again, removing all taints
I0904 23:57:16.740199       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 23:57:16.896154       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-control-plane-n5vrz"
I0904 23:57:17.012598       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-control-plane-n5vrz"
I0904 23:57:17.185744       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-control-plane-n5vrz"
... skipping 308 lines ...
I0904 23:58:25.915873       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000000"
I0904 23:58:25.920454       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 23:58:25.921288       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 23:58:25.921763       1 controller.go:686] It took 0.010444732 seconds to finish syncNodes
I0904 23:58:25.921980       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6cc876f43884, ext:135441548319, loc:(*time.Location)(0x6f10040)}}
I0904 23:58:25.922115       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-53p2mm-mp-0000000], creating 1
I0904 23:58:25.920700       1 topologycache.go:183] Ignoring node capz-53p2mm-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-53p2mm-mp-0000000" not found]}]
I0904 23:58:25.923125       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0904 23:58:25.921003       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6cc876e54d41, ext:135440570588, loc:(*time.Location)(0x6f10040)}}
W0904 23:58:25.922012       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-53p2mm-mp-0000000" does not exist
I0904 23:58:25.920606       1 taint_manager.go:471] "Updating known taints on node" node="capz-53p2mm-mp-0000000" taints=[]
I0904 23:58:25.923420       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-53p2mm-mp-0000000], creating 1
I0904 23:58:25.933091       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000000"
I0904 23:58:25.938435       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-53p2mm-mp-0000000" new_ttl="0s"
I0904 23:58:25.944191       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/kube-proxy-j2kdz" podUID=085d2ac2-6142-4cb4-92dd-248a40461aa4
I0904 23:58:25.944541       1 disruption.go:479] addPod called on pod "kube-proxy-j2kdz"
... skipping 107 lines ...
I0904 23:58:33.849977       1 controller.go:690] Syncing backends for all LB services.
I0904 23:58:33.850016       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 23:58:33.850030       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 23:58:33.850035       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 23:58:33.850042       1 controller.go:686] It took 7.4302e-05 seconds to finish syncNodes
I0904 23:58:33.850063       1 topologycache.go:179] Ignoring node capz-53p2mm-control-plane-n5vrz because it has an excluded label
I0904 23:58:33.850094       1 topologycache.go:183] Ignoring node capz-53p2mm-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:58:25 +0000 UTC 2022-09-04 23:58:25 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-53p2mm-mp-0000000" not found]}]
I0904 23:58:33.850177       1 topologycache.go:183] Ignoring node capz-53p2mm-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-04 23:58:33 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:58:33 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:58:33 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:58:33 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-53p2mm-mp-0000001" not found]}]
I0904 23:58:33.850206       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0904 23:58:33.851621       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-53p2mm-mp-0000001}
I0904 23:58:33.851639       1 taint_manager.go:471] "Updating known taints on node" node="capz-53p2mm-mp-0000001" taints=[]
I0904 23:58:33.851967       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000001"
W0904 23:58:33.851986       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-53p2mm-mp-0000001" does not exist
I0904 23:58:33.852453       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6cca06f6fddb, ext:141636423442, loc:(*time.Location)(0x6f10040)}}
I0904 23:58:33.853052       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6cca061b2834, ext:141622016463, loc:(*time.Location)(0x6f10040)}}
I0904 23:58:33.853155       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6cca72da077a, ext:143372722865, loc:(*time.Location)(0x6f10040)}}
I0904 23:58:33.853172       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-53p2mm-mp-0000001], creating 1
I0904 23:58:33.856344       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6cca730aaafa, ext:143375910549, loc:(*time.Location)(0x6f10040)}}
I0904 23:58:33.856383       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-53p2mm-mp-0000001], creating 1
... skipping 274 lines ...
I0904 23:58:56.611245       1 controller.go:690] Syncing backends for all LB services.
I0904 23:58:56.611490       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 23:58:56.611575       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 23:58:56.611651       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 23:58:56.611711       1 controller.go:686] It took 0.000473108 seconds to finish syncNodes
I0904 23:58:56.611809       1 topologycache.go:179] Ignoring node capz-53p2mm-control-plane-n5vrz because it has an excluded label
I0904 23:58:56.611908       1 topologycache.go:183] Ignoring node capz-53p2mm-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-04 23:58:54 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:58:54 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:58:54 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:58:54 +0000 UTC 2022-09-04 23:58:33 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0904 23:58:56.612210       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0904 23:58:56.612454       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-53p2mm-mp-0000000"
I0904 23:58:56.613579       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000000"
I0904 23:58:56.622479       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-53p2mm-mp-0000000" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0904 23:58:56.623341       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000000"
I0904 23:58:56.756069       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-53p2mm-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 23:58:46 +0000 UTC,LastTransitionTime:2022-09-04 23:58:25 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 23:58:56 +0000 UTC,LastTransitionTime:2022-09-04 23:58:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 23:58:56.756361       1 node_lifecycle_controller.go:1092] Node capz-53p2mm-mp-0000000 ReadyCondition updated. Updating timestamp.
I0904 23:58:56.764772       1 node_lifecycle_controller.go:938] Node capz-53p2mm-mp-0000000 is healthy again, removing all taints
I0904 23:58:56.765046       1 node_lifecycle_controller.go:1092] Node capz-53p2mm-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 23:58:56.765277       1 node_lifecycle_controller.go:1259] Controller detected that zone uksouth::0 is now in state Normal.
I0904 23:58:56.765064       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-53p2mm-mp-0000000}
I0904 23:58:56.765520       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000000"
... skipping 71 lines ...
I0904 23:59:04.299972       1 topologycache.go:179] Ignoring node capz-53p2mm-control-plane-n5vrz because it has an excluded label
I0904 23:59:04.300038       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000001"
I0904 23:59:04.300136       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-53p2mm-mp-0000001"
I0904 23:59:04.308532       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000001"
I0904 23:59:04.309228       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-53p2mm-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0904 23:59:06.690224       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 23:59:06.766861       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-53p2mm-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 23:58:54 +0000 UTC,LastTransitionTime:2022-09-04 23:58:33 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 23:59:04 +0000 UTC,LastTransitionTime:2022-09-04 23:59:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 23:59:06.766911       1 node_lifecycle_controller.go:1092] Node capz-53p2mm-mp-0000001 ReadyCondition updated. Updating timestamp.
I0904 23:59:06.774700       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-53p2mm-mp-0000001}
I0904 23:59:06.775583       1 taint_manager.go:471] "Updating known taints on node" node="capz-53p2mm-mp-0000001" taints=[]
I0904 23:59:06.775822       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-53p2mm-mp-0000001"
I0904 23:59:06.776257       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000001"
I0904 23:59:06.776579       1 node_lifecycle_controller.go:938] Node capz-53p2mm-mp-0000001 is healthy again, removing all taints
... skipping 219 lines ...
I0904 23:59:13.573417       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-sbl84, PodDisruptionBudget controller will avoid syncing.
I0904 23:59:13.573541       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-sbl84"
I0904 23:59:13.572535       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-04 23:59:13.556260936 +0000 UTC m=+183.075834239 - now: 2022-09-04 23:59:13.572528544 +0000 UTC m=+183.092101847]
I0904 23:59:13.572558       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azuredisk-controller"
I0904 23:59:13.572604       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azuredisk-controller-6dbf65647f-sbl84"
I0904 23:59:13.573163       1 event.go:294] "Event occurred" object="kube-system/csi-azuredisk-controller-6dbf65647f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azuredisk-controller-6dbf65647f-sbl84"
I0904 23:59:13.570798       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-sbl84 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-sbl84", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"c819fb4b-7446-4a2b-8c0d-2d0c134bb22f", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 23, 59, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"5956772f-ba59-4760-b8f8-613fd82f32f2", Controller:(*bool)(0xc001eb3377), BlockOwnerDeletion:(*bool)(0xc001eb3378)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 23, 59, 13, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000d5f2c0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc000d5f308), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000d5f320), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-k7d6m", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0013847a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013848c0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-k7d6m", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc00224f1c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001eb37a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00055d340), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb3820)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001eb3840)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001eb3848), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001eb384c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001c86b10), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 23:59:13.575135       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0bd6cd4613ee35d, ext:183077342968, loc:(*time.Location)(0x6f10040)}}
I0904 23:59:13.585246       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azuredisk-controller" duration="42.807411ms"
I0904 23:59:13.585275       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azuredisk-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azuredisk-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:59:13.585310       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azuredisk-controller" startTime="2022-09-04 23:59:13.585296686 +0000 UTC m=+183.104869989"
I0904 23:59:13.586152       1 deployment_util.go:775] Deployment "csi-azuredisk-controller" timed out (false) [last progress check: 2022-09-04 23:59:13 +0000 UTC - now: 2022-09-04 23:59:13.586147002 +0000 UTC m=+183.105720305]
I0904 23:59:13.592171       1 controller_utils.go:581] Controller csi-azuredisk-controller-6dbf65647f created pod csi-azuredisk-controller-6dbf65647f-p9tqk
I0904 23:59:13.592446       1 replica_set.go:394] Pod csi-azuredisk-controller-6dbf65647f-p9tqk created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azuredisk-controller-6dbf65647f-p9tqk", GenerateName:"csi-azuredisk-controller-6dbf65647f-", Namespace:"kube-system", SelfLink:"", UID:"43fcd472-b135-46df-a64b-6d1cd60d2d13", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 23, 59, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azuredisk-controller", "pod-template-hash":"6dbf65647f"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azuredisk-controller-6dbf65647f", UID:"5956772f-ba59-4760-b8f8-613fd82f32f2", Controller:(*bool)(0xc001e656f7), BlockOwnerDeletion:(*bool)(0xc001e656f8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 23, 59, 13, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001105830), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001105860), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001105878), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-r7p2l", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0013853a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"--feature-gates=Topology=true", "--csi-address=$(ADDRESS)", "--v=2", "--timeout=15s", "--leader-election", "--leader-election-namespace=kube-system", "--worker-threads=40", "--extra-create-metadata=true", "--strict-topology=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", 
ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=600s", "-leader-election", "--leader-election-namespace=kube-system", "-worker-threads=500"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-leader-election", "--leader-election-namespace=kube-system", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "-leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=240s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29602", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azuredisk", Image:"mcr.microsoft.com/k8s/csi/azuredisk-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29604", "--user-agent-suffix=OSS-kubectl", "--disable-avset-nodes=false", "--allow-empty-cloud-config=false"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29602, ContainerPort:29602, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29604, ContainerPort:29604, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0013854c0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-r7p2l", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002380380), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001e65b50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azuredisk-controller-sa", DeprecatedServiceAccount:"csi-azuredisk-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000988000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e65bc0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001e65be0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001e65be8), 
DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001e65bec), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001b2b5e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 23:59:13.594299       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azuredisk-controller-6dbf65647f", timestamp:time.Time{wall:0xc0bd6cd4613ee35d, ext:183077342968, loc:(*time.Location)(0x6f10040)}}
I0904 23:59:13.594537       1 replica_set.go:457] Pod csi-azuredisk-controller-6dbf65647f-sbl84 updated, objectMeta {Name:csi-azuredisk-controller-6dbf65647f-sbl84 GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:c819fb4b-7446-4a2b-8c0d-2d0c134bb22f ResourceVersion:940 Generation:0 CreationTimestamp:2022-09-04 23:59:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:5956772f-ba59-4760-b8f8-613fd82f32f2 Controller:0xc001eb3377 BlockOwnerDeletion:0xc001eb3378}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:59:13 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5956772f-ba59-4760-b8f8-613fd82f32f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:vo
lumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:image
PullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azuredisk-controller-6dbf65647f-sbl84 GenerateName:csi-azuredisk-controller-6dbf65647f- Namespace:kube-system SelfLink: UID:c819fb4b-7446-4a2b-8c0d-2d0c134bb22f ResourceVersion:944 Generation:0 CreationTimestamp:2022-09-04 23:59:13 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azuredisk-controller pod-template-hash:6dbf65647f] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azuredisk-controller-6dbf65647f UID:5956772f-ba59-4760-b8f8-613fd82f32f2 Controller:0xc001e65c47 BlockOwnerDeletion:0xc001e65c48}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:59:13 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5956772f-ba59-4760-b8f8-613fd82f32f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azuredisk\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29602,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29604,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0904 23:59:13.592918       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk" podUID=43fcd472-b135-46df-a64b-6d1cd60d2d13
I0904 23:59:13.592927       1 disruption.go:479] addPod called on pod "csi-azuredisk-controller-6dbf65647f-p9tqk"
I0904 23:59:13.595101       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azuredisk-controller-6dbf65647f-p9tqk, PodDisruptionBudget controller will avoid syncing.
I0904 23:59:13.595244       1 disruption.go:482] No matching pdb for pod "csi-azuredisk-controller-6dbf65647f-p9tqk"
... skipping 74 lines ...
I0904 23:59:19.140338       1 replica_set.go:457] Pod csi-snapshot-controller-84ccd6c756-n6vz9 updated, objectMeta {Name:csi-snapshot-controller-84ccd6c756-n6vz9 GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:8ba2af3e-c93f-49e3-922c-dd4ba4c75463 ResourceVersion:1006 Generation:0 CreationTimestamp:2022-09-04 23:59:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:9ee7fcb4-9d79-4bde-b25b-d67e02710543 Controller:0xc0021decc7 BlockOwnerDeletion:0xc0021decc8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:59:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9ee7fcb4-9d79-4bde-b25b-d67e02710543\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-84ccd6c756-n6vz9 GenerateName:csi-snapshot-controller-84ccd6c756- Namespace:kube-system SelfLink: UID:8ba2af3e-c93f-49e3-922c-dd4ba4c75463 ResourceVersion:1008 Generation:0 CreationTimestamp:2022-09-04 23:59:19 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:84ccd6c756] 
Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-84ccd6c756 UID:9ee7fcb4-9d79-4bde-b25b-d67e02710543 Controller:0xc0021df5c7 BlockOwnerDeletion:0xc0021df5c8}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:59:19 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9ee7fcb4-9d79-4bde-b25b-d67e02710543\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]}.
I0904 23:59:19.140511       1 disruption.go:494] updatePod called on pod "csi-snapshot-controller-84ccd6c756-n6vz9"
I0904 23:59:19.140569       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-n6vz9, PodDisruptionBudget controller will avoid syncing.
I0904 23:59:19.140576       1 disruption.go:497] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-n6vz9"
I0904 23:59:19.140636       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-snapshot-controller-84ccd6c756-n6vz9"
I0904 23:59:19.141038       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="41.716975ms"
I0904 23:59:19.141484       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:59:19.141564       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-04 23:59:19.141549678 +0000 UTC m=+188.661123081"
I0904 23:59:19.143119       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-84ccd6c756-6wfch" podUID=5beeab18-d13a-4d6d-846b-afbfce7ec59c
I0904 23:59:19.143141       1 disruption.go:479] addPod called on pod "csi-snapshot-controller-84ccd6c756-6wfch"
I0904 23:59:19.143165       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-6wfch, PodDisruptionBudget controller will avoid syncing.
I0904 23:59:19.143171       1 disruption.go:482] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-6wfch"
I0904 23:59:19.143371       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-04 23:59:19 +0000 UTC - now: 2022-09-04 23:59:19.143365734 +0000 UTC m=+188.662939037]
... skipping 614 lines ...
I0905 00:03:05.019475       1 pv_protection_controller.go:121] Processing PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875
I0905 00:03:05.040669       1 pv_protection_controller.go:198] Got event on PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875
I0905 00:03:05.040795       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875" with version 1707
I0905 00:03:05.040976       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875]: phase: Released, bound to: "azuredisk-8081/pvc-2nvcq (uid: a54b1c50-1f23-42b7-887a-15dbe8da8875)", boundByController: false
I0905 00:03:05.041067       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875]: volume is bound to claim azuredisk-8081/pvc-2nvcq
I0905 00:03:05.041150       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875]: claim azuredisk-8081/pvc-2nvcq not found
I0905 00:03:05.051521       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875: Operation cannot be fulfilled on persistentvolumes "pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875": the object has been modified; please apply your changes to the latest version and try again
I0905 00:03:05.051539       1 pv_protection_controller.go:124] Finished processing PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875 (32.033292ms)
E0905 00:03:05.051551       1 pv_protection_controller.go:114] PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875": the object has been modified; please apply your changes to the latest version and try again
I0905 00:03:05.051600       1 pv_protection_controller.go:121] Processing PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875
I0905 00:03:05.062651       1 pv_controller_base.go:238] volume "pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875" deleted
I0905 00:03:05.062803       1 pv_controller_base.go:589] deletion of claim "azuredisk-8081/pvc-2nvcq" was already processed
I0905 00:03:05.063741       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875
I0905 00:03:05.063755       1 pv_protection_controller.go:124] Finished processing PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875 (12.145086ms)
I0905 00:03:05.063771       1 pv_protection_controller.go:121] Processing PV pvc-a54b1c50-1f23-42b7-887a-15dbe8da8875
... skipping 2752 lines ...
I0905 00:12:32.851555       1 pv_controller.go:255] synchronizing PersistentVolumeClaim[azuredisk-1353/pvc-p2fzt]: phase: Pending, bound to: "", bindCompleted: false, boundByController: false
I0905 00:12:32.851574       1 pv_controller.go:350] synchronizing unbound PersistentVolumeClaim[azuredisk-1353/pvc-p2fzt]: no volume found
I0905 00:12:32.851586       1 pv_controller.go:1535] provisionClaim[azuredisk-1353/pvc-p2fzt]: started
I0905 00:12:32.851595       1 pv_controller.go:1851] scheduleOperation[provision-azuredisk-1353/pvc-p2fzt[73b44c3f-1305-458c-bcae-fabadcc9bab0]]
I0905 00:12:32.851613       1 pv_controller.go:1788] provisionClaimOperationExternal [azuredisk-1353/pvc-p2fzt] started, class: "azuredisk-1353-kubernetes.io-azure-disk-dynamic-sc-hp6t4"
I0905 00:12:32.855072       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" duration="38.732974ms"
I0905 00:12:32.855101       1 deployment_controller.go:497] "Error syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" err="Operation cannot be fulfilled on deployments.apps \"azuredisk-volume-tester-jqdxv\": the object has been modified; please apply your changes to the latest version and try again"
I0905 00:12:32.855134       1 deployment_controller.go:583] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" startTime="2022-09-05 00:12:32.85511771 +0000 UTC m=+982.374691113"
I0905 00:12:32.855880       1 deployment_util.go:775] Deployment "azuredisk-volume-tester-jqdxv" timed out (false) [last progress check: 2022-09-05 00:12:32 +0000 UTC - now: 2022-09-05 00:12:32.855875927 +0000 UTC m=+982.375449330]
I0905 00:12:32.855561       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f"
I0905 00:12:32.855607       1 replica_set.go:667] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f" (30.517689ms)
I0905 00:12:32.856123       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f", timestamp:time.Time{wall:0xc0bd6d9c31417eb5, ext:982345949264, loc:(*time.Location)(0x6f10040)}}
I0905 00:12:32.856227       1 replica_set_utils.go:59] Updating status for : azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
... skipping 250 lines ...
I0905 00:12:52.816329       1 replica_set.go:667] Finished syncing ReplicaSet "azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f" (292.249µs)
I0905 00:12:52.818716       1 deployment_controller.go:183] "Updating deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv"
I0905 00:12:52.819116       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" duration="4.068846ms"
I0905 00:12:52.819269       1 deployment_controller.go:583] "Started syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" startTime="2022-09-05 00:12:52.819257902 +0000 UTC m=+1002.338831205"
I0905 00:12:52.819540       1 progress.go:195] Queueing up deployment "azuredisk-volume-tester-jqdxv" for a progress check after 597s
I0905 00:12:52.819565       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azuredisk-1353/azuredisk-volume-tester-jqdxv" duration="298.962µs"
I0905 00:12:52.823973       1 reconciler.go:420] "Multi-Attach error: volume is already used by pods" pods=[azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f-mmgd8] attachedTo=[capz-53p2mm-mp-0000000] volume={VolumeToAttach:{MultiAttachErrorReported:false VolumeName:kubernetes.io/csi/disk.csi.azure.com^/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-53p2mm/providers/Microsoft.Compute/disks/pvc-73b44c3f-1305-458c-bcae-fabadcc9bab0 VolumeSpec:0xc000fe00c0 NodeName:capz-53p2mm-mp-0000001 ScheduledPods:[&Pod{ObjectMeta:{azuredisk-volume-tester-jqdxv-fbd977b6f-7hbzh azuredisk-volume-tester-jqdxv-fbd977b6f- azuredisk-1353  fe96bb81-c142-486a-b076-7ba5ba4c7b30 3639 0 2022-09-05 00:12:52 +0000 UTC <nil> <nil> map[app:azuredisk-volume-tester-2050257992909156333 pod-template-hash:fbd977b6f] map[] [{apps/v1 ReplicaSet azuredisk-volume-tester-jqdxv-fbd977b6f bf6c5392-b977-49ca-a529-943a525dc125 0xc002667a97 0xc002667a98}] [] [{kube-controller-manager Update v1 2022-09-05 00:12:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf6c5392-b977-49ca-a529-943a525dc125\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:test-volume-1,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:&PersistentVolumeClaimVolumeSource{ClaimName:pvc-p2fzt,ReadOnly:false,},RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},Volume{Name:kube-api-access-vr2qx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:volume-tester,Image:k8s.gcr.io/e2e-test-images/busybox:1.29-2,Command:[/bin/sh],Args:[-c echo 'hello world' >> /mnt/test-1/data && while true; do sleep 3600; 
done],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-volume-1,ReadOnly:false,MountPath:/mnt/test-1,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vr2qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{kubernetes.io/os: linux,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-53p2mm-mp-0000001,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,OS:nil,HostUsers:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-09-05 00:12:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}]}}
I0905 00:12:52.824205       1 event.go:294] "Event occurred" object="azuredisk-1353/azuredisk-volume-tester-jqdxv-fbd977b6f-7hbzh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="FailedAttachVolume" message="Multi-Attach error for volume \"pvc-73b44c3f-1305-458c-bcae-fabadcc9bab0\" Volume is already used by pod(s) azuredisk-volume-tester-jqdxv-fbd977b6f-mmgd8"
I0905 00:12:57.666283       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="95.002µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:55942" resp=200
I0905 00:12:58.692960       1 reflector.go:559] vendor/k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Service total 10 items received
I0905 00:13:01.731368       1 gc_controller.go:221] GC'ing orphaned
I0905 00:13:01.731397       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0905 00:13:02.161816       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-53p2mm-mp-0000001"
I0905 00:13:06.728953       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
... skipping 3604 lines ...
I0905 00:21:47.093926       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-a804453c-352c-4728-a28c-fea62a333a14]: claim azuredisk-7051/pvc-z75z9 not found
I0905 00:21:47.100428       1 pv_protection_controller.go:198] Got event on PV pvc-a804453c-352c-4728-a28c-fea62a333a14
I0905 00:21:47.100460       1 pv_controller_base.go:726] storeObjectUpdate updating volume "pvc-a804453c-352c-4728-a28c-fea62a333a14" with version 5576
I0905 00:21:47.100480       1 pv_controller.go:551] synchronizing PersistentVolume[pvc-a804453c-352c-4728-a28c-fea62a333a14]: phase: Released, bound to: "azuredisk-7051/pvc-z75z9 (uid: a804453c-352c-4728-a28c-fea62a333a14)", boundByController: false
I0905 00:21:47.100502       1 pv_controller.go:585] synchronizing PersistentVolume[pvc-a804453c-352c-4728-a28c-fea62a333a14]: volume is bound to claim azuredisk-7051/pvc-z75z9
I0905 00:21:47.100575       1 pv_controller.go:619] synchronizing PersistentVolume[pvc-a804453c-352c-4728-a28c-fea62a333a14]: claim azuredisk-7051/pvc-z75z9 not found
I0905 00:21:47.104110       1 pv_protection_controller.go:173] Error removing protection finalizer from PV pvc-a804453c-352c-4728-a28c-fea62a333a14: Operation cannot be fulfilled on persistentvolumes "pvc-a804453c-352c-4728-a28c-fea62a333a14": the object has been modified; please apply your changes to the latest version and try again
I0905 00:21:47.104276       1 pv_protection_controller.go:124] Finished processing PV pvc-a804453c-352c-4728-a28c-fea62a333a14 (10.673739ms)
E0905 00:21:47.104365       1 pv_protection_controller.go:114] PV pvc-a804453c-352c-4728-a28c-fea62a333a14 failed with : Operation cannot be fulfilled on persistentvolumes "pvc-a804453c-352c-4728-a28c-fea62a333a14": the object has been modified; please apply your changes to the latest version and try again
I0905 00:21:47.104471       1 pv_protection_controller.go:121] Processing PV pvc-a804453c-352c-4728-a28c-fea62a333a14
I0905 00:21:47.107966       1 pv_protection_controller.go:176] Removed protection finalizer from PV pvc-a804453c-352c-4728-a28c-fea62a333a14
I0905 00:21:47.107987       1 pv_protection_controller.go:124] Finished processing PV pvc-a804453c-352c-4728-a28c-fea62a333a14 (3.474877ms)
I0905 00:21:47.109023       1 pv_controller_base.go:238] volume "pvc-a804453c-352c-4728-a28c-fea62a333a14" deleted
I0905 00:21:47.109051       1 pv_controller_base.go:589] deletion of claim "azuredisk-7051/pvc-z75z9" was already processed
I0905 00:21:47.110265       1 pv_protection_controller.go:121] Processing PV pvc-a804453c-352c-4728-a28c-fea62a333a14
... skipping 690 lines ...
I0905 00:23:31.308293       1 namespace_controller.go:180] Finished syncing namespace "azuredisk-4415" (100.102µs)
2022/09/05 00:23:32 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 12 of 59 Specs in 1292.117 seconds
SUCCESS! -- 12 Passed | 0 Failed | 0 Pending | 47 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-tdzf6, container manager
STEP: Dumping workload cluster default/capz-53p2mm logs
Sep  5 00:25:08.312: INFO: Collecting logs for Linux node capz-53p2mm-control-plane-n5vrz in cluster capz-53p2mm in namespace default

Sep  5 00:26:08.314: INFO: Collecting boot logs for AzureMachine capz-53p2mm-control-plane-n5vrz

Failed to get logs for machine capz-53p2mm-control-plane-q7b54, cluster default/capz-53p2mm: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  5 00:26:09.771: INFO: Collecting logs for Linux node capz-53p2mm-mp-0000000 in cluster capz-53p2mm in namespace default

Sep  5 00:27:09.774: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-53p2mm-mp-0

Sep  5 00:27:10.292: INFO: Collecting logs for Linux node capz-53p2mm-mp-0000001 in cluster capz-53p2mm in namespace default

Sep  5 00:28:10.294: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-53p2mm-mp-0

Failed to get logs for machine pool capz-53p2mm-mp-0, cluster default/capz-53p2mm: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-53p2mm kube-system pod logs
STEP: Fetching kube-system pod logs took 1.014282566s
STEP: Collecting events for Pod kube-system/calico-node-2llnt
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk, container csi-provisioner
STEP: Collecting events for Pod kube-system/calico-node-f2wsk
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-6rcsl, container coredns
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-53p2mm-control-plane-n5vrz, container kube-apiserver
STEP: Collecting events for Pod kube-system/kube-proxy-gj7dv
STEP: Creating log watcher for controller kube-system/calico-node-79nd4, container calico-node
STEP: Collecting events for Pod kube-system/calico-node-79nd4
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-53p2mm-control-plane-n5vrz
STEP: Creating log watcher for controller kube-system/calico-node-f2wsk, container calico-node
STEP: failed to find events of Pod "kube-apiserver-capz-53p2mm-control-plane-n5vrz"
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-53p2mm-control-plane-n5vrz, container kube-controller-manager
STEP: Collecting events for Pod kube-system/csi-azuredisk-node-kd4mv
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-s555t, container liveness-probe
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-sbl84, container liveness-probe
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-53p2mm-control-plane-n5vrz
STEP: failed to find events of Pod "kube-controller-manager-capz-53p2mm-control-plane-n5vrz"
STEP: Creating log watcher for controller kube-system/kube-proxy-gj7dv, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-s9qjq, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-sbl84, container azuredisk
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-sbl84, container csi-provisioner
... skipping 12 lines ...
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-zrrx8
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-53p2mm-control-plane-n5vrz
STEP: Collecting events for Pod kube-system/kube-proxy-j2kdz
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-kd4mv, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-p9tqk, container azuredisk
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-6rcsl
STEP: failed to find events of Pod "kube-scheduler-capz-53p2mm-control-plane-n5vrz"
STEP: Creating log watcher for controller kube-system/kube-proxy-n7phh, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-hkr6q, container liveness-probe
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-6wfch
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-zrrx8, container coredns
STEP: Collecting events for Pod kube-system/kube-proxy-n7phh
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-n6vz9, container csi-snapshot-controller
... skipping 2 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-53p2mm-control-plane-n5vrz, container kube-scheduler
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-sbl84, container csi-snapshotter
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-kd4mv, container azuredisk
STEP: Creating log watcher for controller kube-system/csi-azuredisk-node-hkr6q, container node-driver-registrar
STEP: Creating log watcher for controller kube-system/csi-azuredisk-controller-6dbf65647f-sbl84, container csi-resizer
STEP: Collecting events for Pod kube-system/etcd-capz-53p2mm-control-plane-n5vrz
STEP: failed to find events of Pod "etcd-capz-53p2mm-control-plane-n5vrz"
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-6wfch, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-84ccd6c756-n6vz9
STEP: Creating log watcher for controller kube-system/etcd-capz-53p2mm-control-plane-n5vrz, container etcd
STEP: Fetching activity logs took 3.565929256s
================ REDACTING LOGS ================
All sensitive variables are redacted
... skipping 15 lines ...