Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-04 09:23
Elapsed: 32m44s
Revision
Uploader: crier

No Test Failures!


6 Passed Tests

28 Skipped Tests

Error lines from build-log.txt

... skipping 704 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
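The `NotFound` error above is benign: the setup script probes for an existing secret, tolerates the miss, and then creates it unconditionally, so the first run always logs one `NotFound` line. A minimal stand-in sketch of that shape using plain files instead of `kubectl` (the file path and contents are made up for illustration):

```shell
# Probe-then-create: the lookup of a missing resource is allowed to fail
# (its error is reported but not fatal), and the create step follows.
cat /tmp/demo-secret 2>/dev/null \
  || echo "demo-secret not found (expected on first run)"
printf 'clientSecret=redacted\n' > /tmp/demo-secret   # create step
echo "demo-secret created"
```

Re-running the same snippet is idempotent: the probe now succeeds and the create step simply overwrites the existing file.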
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 141 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-vo7bw1-kubeconfig; do sleep 1; done"
capz-vo7bw1-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-vo7bw1-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
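The extraction command above follows the standard Cluster API pattern: the workload cluster's kubeconfig sits base64-encoded under `.data.value` of the `<cluster>-kubeconfig` secret, so retrieval is a `jq` field lookup plus a `base64 --decode`. A stand-in demonstration of the decode step that needs no cluster (the sample kubeconfig content is hypothetical):

```shell
# Kubernetes secrets carry their data base64-encoded, so extracting the
# kubeconfig reduces to decoding the secret's .data.value field. Here we
# encode a stand-in kubeconfig, then decode it the same way the job does.
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')
printf '%s' "$encoded" | base64 --decode > /tmp/kubeconfig-demo
head -n1 /tmp/kubeconfig-demo   # prints: apiVersion: v1
```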
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
capz-vo7bw1-control-plane-96q6v   NotReady   control-plane   7s    v1.26.0-alpha.0.376+e7192a49552483
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-vo7bw1-control-plane-96q6v condition met
node/capz-vo7bw1-md-0-7wvgh condition met
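The `timeout --foreground ... "while ! <check>; do sleep 1; done"` idiom used throughout this log (for the kubeconfig secret, the control-plane node, and the node conditions above) can be factored into one small helper. A sketch assuming GNU coreutils `timeout`; the helper name `wait_for` is made up for illustration:

```shell
# Poll a shell condition once per second until it succeeds, giving up
# after $1 seconds -- the same shape as the timeout/while loops above.
# `timeout --foreground` keeps the polling loop attached to the terminal
# and exits with status 124 if the deadline passes first.
wait_for() {
  local deadline=$1; shift
  timeout --foreground "$deadline" bash -c "until $*; do sleep 1; done"
}

# Example: wait up to 5 seconds for a marker file to exist.
touch /tmp/ready-marker
wait_for 5 'test -f /tmp/ready-marker' && echo "condition met"
```

The same helper covers both waits in the log, e.g. `wait_for 600 'kubectl get nodes | grep -q control-plane'` for the node check, with the check command quoted as a single argument.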
... skipping 63 lines ...
Dynamic Provisioning 
  should create a storage account with tags [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/dynamic_provisioning_test.go:73
STEP: Creating a kubernetes client
Sep  4 09:39:09.805: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  4 09:39:10.060: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/04 09:39:10 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/04 09:39:10 Check successfully
... skipping 44 lines ...
Sep  4 09:39:31.425: INFO: PersistentVolumeClaim pvc-4s8qp found but phase is Pending instead of Bound.
Sep  4 09:39:33.456: INFO: PersistentVolumeClaim pvc-4s8qp found and phase=Bound (22.428075584s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  4 09:39:33.550: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-48h8q" in namespace "azurefile-2540" to be "Succeeded or Failed"
Sep  4 09:39:33.581: INFO: Pod "azurefile-volume-tester-48h8q": Phase="Pending", Reason="", readiness=false. Elapsed: 30.551985ms
Sep  4 09:39:35.614: INFO: Pod "azurefile-volume-tester-48h8q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063752327s
Sep  4 09:39:37.647: INFO: Pod "azurefile-volume-tester-48h8q": Phase="Running", Reason="", readiness=false. Elapsed: 4.097356425s
Sep  4 09:39:39.681: INFO: Pod "azurefile-volume-tester-48h8q": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130783826s
STEP: Saw pod success
Sep  4 09:39:39.681: INFO: Pod "azurefile-volume-tester-48h8q" satisfied condition "Succeeded or Failed"
Sep  4 09:39:39.681: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-48h8q"
Sep  4 09:39:39.728: INFO: Pod azurefile-volume-tester-48h8q has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-48h8q in namespace azurefile-2540
Sep  4 09:39:39.775: INFO: deleting PVC "azurefile-2540"/"pvc-4s8qp"
Sep  4 09:39:39.775: INFO: Deleting PersistentVolumeClaim "pvc-4s8qp"
... skipping 156 lines ...
Sep  4 09:41:30.423: INFO: PersistentVolumeClaim pvc-slqv2 found but phase is Pending instead of Bound.
Sep  4 09:41:32.460: INFO: PersistentVolumeClaim pvc-slqv2 found and phase=Bound (22.387784915s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  4 09:41:32.553: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-fc5lf" in namespace "azurefile-2790" to be "Error status code"
Sep  4 09:41:32.583: INFO: Pod "azurefile-volume-tester-fc5lf": Phase="Pending", Reason="", readiness=false. Elapsed: 29.96477ms
Sep  4 09:41:34.616: INFO: Pod "azurefile-volume-tester-fc5lf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06336433s
Sep  4 09:41:36.649: INFO: Pod "azurefile-volume-tester-fc5lf": Phase="Failed", Reason="", readiness=false. Elapsed: 4.096532718s
STEP: Saw pod failure
Sep  4 09:41:36.649: INFO: Pod "azurefile-volume-tester-fc5lf" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  4 09:41:36.692: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-fc5lf"
Sep  4 09:41:36.728: INFO: Pod azurefile-volume-tester-fc5lf has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-fc5lf in namespace azurefile-2790
Sep  4 09:41:36.766: INFO: deleting PVC "azurefile-2790"/"pvc-slqv2"
... skipping 180 lines ...
Sep  4 09:43:26.898: INFO: PersistentVolumeClaim pvc-7dms7 found but phase is Pending instead of Bound.
Sep  4 09:43:28.930: INFO: PersistentVolumeClaim pvc-7dms7 found and phase=Bound (2.062078423s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  4 09:43:29.022: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-gqjr6" in namespace "azurefile-4538" to be "Succeeded or Failed"
Sep  4 09:43:29.055: INFO: Pod "azurefile-volume-tester-gqjr6": Phase="Pending", Reason="", readiness=false. Elapsed: 32.445532ms
Sep  4 09:43:31.088: INFO: Pod "azurefile-volume-tester-gqjr6": Phase="Running", Reason="", readiness=true. Elapsed: 2.06623304s
Sep  4 09:43:33.122: INFO: Pod "azurefile-volume-tester-gqjr6": Phase="Running", Reason="", readiness=false. Elapsed: 4.099642783s
Sep  4 09:43:35.155: INFO: Pod "azurefile-volume-tester-gqjr6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132675007s
STEP: Saw pod success
Sep  4 09:43:35.155: INFO: Pod "azurefile-volume-tester-gqjr6" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  4 09:44:05.937: INFO: deleting Pod "azurefile-4538"/"azurefile-volume-tester-gqjr6"
... skipping 863 lines ...
I0904 09:35:11.081222       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662284111\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662284110\" (2022-09-04 08:35:10 +0000 UTC to 2023-09-04 08:35:10 +0000 UTC (now=2022-09-04 09:35:11.081192658 +0000 UTC))"
I0904 09:35:11.081496       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0904 09:35:11.080485       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0904 09:35:11.080508       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0904 09:35:11.081591       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0904 09:35:11.082068       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
E0904 09:35:15.763904       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 09:35:15.764124       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0904 09:35:18.155927       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0904 09:35:18.156301       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-vo7bw1-control-plane-96q6v_86baf1da-4c6b-4731-bc80-b93c69e73df0 became leader"
W0904 09:35:18.183053       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0904 09:35:18.184078       1 azure_auth.go:232] Using AzurePublicCloud environment
I0904 09:35:18.184387       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0904 09:35:18.184626       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0904 09:35:18.186843       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
I0904 09:35:18.186663       1 reflector.go:221] Starting reflector *v1.ServiceAccount (20h3m10.040708951s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 09:35:18.187187       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0904 09:35:18.186743       1 reflector.go:221] Starting reflector *v1.Secret (20h3m10.040708951s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 09:35:18.187629       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0904 09:35:18.186763       1 shared_informer.go:255] Waiting for caches to sync for tokens
W0904 09:35:18.211968       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 09:35:18.211999       1 controllermanager.go:573] Starting "serviceaccount"
I0904 09:35:18.225192       1 controllermanager.go:602] Started "serviceaccount"
I0904 09:35:18.225238       1 controllermanager.go:573] Starting "tokencleaner"
I0904 09:35:18.225522       1 serviceaccounts_controller.go:117] Starting service account controller
I0904 09:35:18.225542       1 shared_informer.go:255] Waiting for caches to sync for service account
I0904 09:35:18.239196       1 controllermanager.go:602] Started "tokencleaner"
... skipping 44 lines ...
I0904 09:35:18.361403       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 09:35:18.361523       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 09:35:18.361700       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 09:35:18.361876       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 09:35:18.361997       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0904 09:35:18.362129       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0904 09:35:18.362310       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 09:35:18.362329       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 09:35:18.362511       1 controllermanager.go:602] Started "persistentvolume-binder"
I0904 09:35:18.362641       1 controllermanager.go:573] Starting "endpoint"
I0904 09:35:18.362796       1 pv_controller_base.go:318] Starting persistent volume controller
I0904 09:35:18.362818       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0904 09:35:18.511630       1 controllermanager.go:602] Started "endpoint"
... skipping 83 lines ...
I0904 09:35:20.519700       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 09:35:20.519861       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 09:35:20.520210       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 09:35:20.520367       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 09:35:20.520527       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0904 09:35:20.520624       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0904 09:35:20.520798       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 09:35:20.520939       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 09:35:20.521167       1 controllermanager.go:602] Started "attachdetach"
I0904 09:35:20.521308       1 controllermanager.go:573] Starting "endpointslice"
I0904 09:35:20.523117       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-control-plane-96q6v"
W0904 09:35:20.523307       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-vo7bw1-control-plane-96q6v" does not exist
I0904 09:35:20.523737       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 09:35:20.523911       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0904 09:35:20.563503       1 controllermanager.go:602] Started "endpointslice"
I0904 09:35:20.563801       1 controllermanager.go:573] Starting "resourcequota"
I0904 09:35:20.564166       1 endpointslice_controller.go:261] Starting endpoint slice controller
I0904 09:35:20.564351       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
... skipping 414 lines ...
I0904 09:35:23.218910       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0904 09:35:23.219080       1 request.go:614] Waited for 464.144603ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?limit=500&resourceVersion=0
I0904 09:35:23.219527       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0904 09:35:23.229831       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 09:35:23.229908       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 09:35:23.218446178 +0000 UTC m=+13.513544850 - now: 2022-09-04 09:35:23.229899837 +0000 UTC m=+13.524998409]
I0904 09:35:23.234344       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="464.283004ms"
I0904 09:35:23.234379       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:35:23.234424       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 09:35:23.234405461 +0000 UTC m=+13.529504133"
I0904 09:35:23.235065       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 09:35:23 +0000 UTC - now: 2022-09-04 09:35:23.235058064 +0000 UTC m=+13.530156636]
I0904 09:35:23.239722       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 09:35:23.240069       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="5.650929ms"
I0904 09:35:23.240224       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 09:35:23.240208991 +0000 UTC m=+13.535307563"
I0904 09:35:23.240770       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 09:35:23 +0000 UTC - now: 2022-09-04 09:35:23.240762494 +0000 UTC m=+13.535861066]
... skipping 74 lines ...
I0904 09:35:23.624101       1 disruption.go:497] No matching pdb for pod "kube-scheduler-capz-vo7bw1-control-plane-96q6v"
I0904 09:35:23.626167       1 event.go:294] "Event occurred" object="kube-system/kube-scheduler-capz-vo7bw1-control-plane-96q6v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0904 09:35:23.634781       1 disruption.go:494] updatePod called on pod "kube-controller-manager-capz-vo7bw1-control-plane-96q6v"
I0904 09:35:23.635066       1 disruption.go:570] No PodDisruptionBudgets found for pod kube-controller-manager-capz-vo7bw1-control-plane-96q6v, PodDisruptionBudget controller will avoid syncing.
I0904 09:35:23.635252       1 disruption.go:497] No matching pdb for pod "kube-controller-manager-capz-vo7bw1-control-plane-96q6v"
I0904 09:35:23.635955       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager-capz-vo7bw1-control-plane-96q6v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0904 09:35:23.636500       1 controller_utils.go:146] "Failed to update status for pod" pod="kube-system/kube-apiserver-capz-vo7bw1-control-plane-96q6v" err="Operation cannot be fulfilled on pods \"kube-apiserver-capz-vo7bw1-control-plane-96q6v\": the object has been modified; please apply your changes to the latest version and try again"
W0904 09:35:23.636749       1 node_lifecycle_controller.go:1344] Unable to mark pod {namespace:kube-system name:kube-apiserver-capz-vo7bw1-control-plane-96q6v} NotReady on node capz-vo7bw1-control-plane-96q6v: Operation cannot be fulfilled on pods "kube-apiserver-capz-vo7bw1-control-plane-96q6v": the object has been modified; please apply your changes to the latest version and try again.
I0904 09:35:23.637011       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-capz-vo7bw1-control-plane-96q6v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0904 09:35:23.642406       1 controller_utils.go:120] "Update ready status of pods on node" node="capz-vo7bw1-control-plane-96q6v"
I0904 09:35:23.642838       1 controller_utils.go:138] "Updating ready status of pod to false" pod="kube-apiserver-capz-vo7bw1-control-plane-96q6v"
I0904 09:35:23.657167       1 request.go:614] Waited for 128.256265ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/replicaset-controller/token
I0904 09:35:23.658621       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-capz-vo7bw1-control-plane-96q6v" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
... skipping 246 lines ...
I0904 09:35:32.541375       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/metrics-server-76f7667fbf"
I0904 09:35:32.541908       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (38.375726ms)
I0904 09:35:32.542142       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-76f7667fbf", timestamp:time.Time{wall:0xc0bd3a391e0749d6, ext:22798892806, loc:(*time.Location)(0x6f10040)}}
I0904 09:35:32.542483       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-76f7667fbf, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0904 09:35:32.552595       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (10.457462ms)
I0904 09:35:32.553848       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="60.811358ms"
I0904 09:35:32.554044       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:35:32.554211       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 09:35:32.554191731 +0000 UTC m=+22.849290403"
I0904 09:35:32.554932       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-04 09:35:32 +0000 UTC - now: 2022-09-04 09:35:32.554923936 +0000 UTC m=+22.850022508]
I0904 09:35:32.555651       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-76f7667fbf", timestamp:time.Time{wall:0xc0bd3a391e0749d6, ext:22798892806, loc:(*time.Location)(0x6f10040)}}
I0904 09:35:32.555939       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/metrics-server-76f7667fbf"
I0904 09:35:32.556080       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-76f7667fbf" (264.302µs)
I0904 09:35:32.638806       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="84.544299ms"
I0904 09:35:32.639079       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 09:35:32.639055932 +0000 UTC m=+22.934154504"
I0904 09:35:32.640576       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/metrics-server"
I0904 09:35:32.666765       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="27.690263ms"
I0904 09:35:32.667014       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:35:32.667175       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 09:35:32.667157397 +0000 UTC m=+22.962256069"
I0904 09:35:32.690625       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/metrics-server"
I0904 09:35:32.694958       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="27.767564ms"
I0904 09:35:32.695063       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-04 09:35:32.694996661 +0000 UTC m=+22.990095333"
I0904 09:35:32.695869       1 deployment_util.go:775] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-04 09:35:32 +0000 UTC - now: 2022-09-04 09:35:32.695838166 +0000 UTC m=+22.990936738]
I0904 09:35:32.695916       1 progress.go:195] Queueing up deployment "metrics-server" for a progress check after 599s
... skipping 19 lines ...
I0904 09:35:34.107861       1 controller_utils.go:581] Controller calico-kube-controllers-755ff8d7b5 created pod calico-kube-controllers-755ff8d7b5-5cmj2
I0904 09:35:34.108095       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 09:35:34.108377       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0904 09:35:34.108592       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-755ff8d7b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-755ff8d7b5-5cmj2"
I0904 09:35:34.109417       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 09:35:34.097011842 +0000 UTC m=+24.392110414 - now: 2022-09-04 09:35:34.109407815 +0000 UTC m=+24.404506387]
I0904 09:35:34.128802       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="37.852724ms"
I0904 09:35:34.129023       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:35:34.129186       1 disruption.go:448] add DB "calico-kube-controllers"
I0904 09:35:34.129600       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (33.411098ms)
I0904 09:35:34.129652       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bd3a3985bf5642, ext:24391524210, loc:(*time.Location)(0x6f10040)}}
I0904 09:35:34.129848       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 09:35:34.129194       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 09:35:34.129175832 +0000 UTC m=+24.424274404"
I0904 09:35:34.130508       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 09:35:34 +0000 UTC - now: 2022-09-04 09:35:34.13049884 +0000 UTC m=+24.425597512]
... skipping 29 lines ...
I0904 09:35:34.189897       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="1.318308ms"
I0904 09:35:34.188524       1 disruption.go:659] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (15.934995ms)
I0904 09:35:34.188960       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-node-5kxgp" podUID=32e336d0-3906-49d8-9728-511a203a3aa7
I0904 09:35:34.188970       1 disruption.go:479] addPod called on pod "calico-node-5kxgp"
I0904 09:35:34.188997       1 daemon_controller.go:520] Pod calico-node-5kxgp added.
I0904 09:35:34.189039       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/calico-node-5kxgp"
E0904 09:35:34.190780       1 disruption.go:614] Error syncing PodDisruptionBudget kube-system/calico-kube-controllers, requeuing: Operation cannot be fulfilled on poddisruptionbudgets.policy "calico-kube-controllers": the object has been modified; please apply your changes to the latest version and try again
I0904 09:35:34.191036       1 disruption.go:659] Finished syncing PodDisruptionBudget "kube-system/calico-kube-controllers" (34.201µs)
I0904 09:35:34.190792       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-node-5kxgp, PodDisruptionBudget controller will avoid syncing.
I0904 09:35:34.191306       1 disruption.go:482] No matching pdb for pod "calico-node-5kxgp"
I0904 09:35:34.190801       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a398ad317c7, ext:24476704915, loc:(*time.Location)(0x6f10040)}}
I0904 09:35:34.190712       1 controller_utils.go:581] Controller calico-node created pod calico-node-5kxgp
I0904 09:35:34.191402       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
... skipping 228 lines ...
I0904 09:35:52.807891       1 shared_informer.go:255] Waiting for caches to sync for resource quota
I0904 09:35:52.807908       1 resource_quota_monitor.go:283] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0904 09:35:52.808053       1 reflector.go:221] Starting reflector *v1.PartialObjectMetadata (12h7m16.761708729s) from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 09:35:52.808065       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 09:35:52.808376       1 reflector.go:221] Starting reflector *v1.PartialObjectMetadata (14h48m15.294989168s) from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 09:35:52.808388       1 reflector.go:257] Listing and watching *v1.PartialObjectMetadata from vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90
I0904 09:35:52.869599       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-vo7bw1-control-plane-96q6v transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 09:35:30 +0000 UTC,LastTransitionTime:2022-09-04 09:34:58 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 09:35:50 +0000 UTC,LastTransitionTime:2022-09-04 09:35:50 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 09:35:52.869728       1 node_lifecycle_controller.go:1092] Node capz-vo7bw1-control-plane-96q6v ReadyCondition updated. Updating timestamp.
I0904 09:35:52.869761       1 node_lifecycle_controller.go:938] Node capz-vo7bw1-control-plane-96q6v is healthy again, removing all taints
I0904 09:35:52.869783       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 09:35:52.908244       1 resource_quota_monitor.go:283] quota monitor not synced: crd.projectcalico.org/v1, Resource=networksets
I0904 09:35:53.008022       1 shared_informer.go:285] caches populated
I0904 09:35:53.008051       1 shared_informer.go:262] Caches are synced for resource quota
I0904 09:35:53.008084       1 resource_quota_controller.go:462] synced quota controller
W0904 09:35:53.284641       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 09:35:53.284827       1 garbagecollector.go:220] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0904 09:35:53.284837       1 garbagecollector.go:226] reset restmapper
E0904 09:35:53.292213       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0904 09:35:53.301776       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 09:35:53.302889       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=caliconodestatuses", kind "crd.projectcalico.org/v1, Kind=CalicoNodeStatus"
I0904 09:35:53.302989       1 graph_builder.go:176] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ipamblocks", kind "crd.projectcalico.org/v1, Kind=IPAMBlock"
... skipping 241 lines ...
I0904 09:36:22.769064       1 pv_controller_base.go:612] resyncing PV controller
I0904 09:36:22.814549       1 gc_controller.go:221] GC'ing orphaned
I0904 09:36:22.814576       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0904 09:36:22.873383       1 node_lifecycle_controller.go:1092] Node capz-vo7bw1-control-plane-96q6v ReadyCondition updated. Updating timestamp.
E0904 09:36:23.018804       1 resource_quota_controller.go:417] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0904 09:36:23.018873       1 resource_quota_controller.go:432] no resource updates from discovery, skipping resource quota sync
W0904 09:36:24.143641       1 garbagecollector.go:752] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0904 09:36:26.049097       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="108.7µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:55210" resp=200
I0904 09:36:27.582852       1 certificate_controller.go:76] Adding certificate request csr-4c6tx
I0904 09:36:27.582905       1 certificate_controller.go:167] Finished syncing certificate request "csr-4c6tx" (9.6µs)
I0904 09:36:27.582926       1 certificate_controller.go:76] Adding certificate request csr-4c6tx
I0904 09:36:27.582945       1 certificate_controller.go:167] Finished syncing certificate request "csr-4c6tx" (6.9µs)
I0904 09:36:27.582958       1 certificate_controller.go:76] Adding certificate request csr-4c6tx
... skipping 54 lines ...
I0904 09:36:36.856427       1 certificate_controller.go:81] Updating certificate request csr-s5sg8
I0904 09:36:36.856541       1 certificate_controller.go:167] Finished syncing certificate request "csr-s5sg8" (700ns)
I0904 09:36:37.714029       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 09:36:37.770345       1 pv_controller_base.go:612] resyncing PV controller
I0904 09:36:39.460730       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-ks3yhm" (15.8µs)
I0904 09:36:40.749279       1 topologycache.go:179] Ignoring node capz-vo7bw1-control-plane-96q6v because it has an excluded label
I0904 09:36:40.749409       1 topologycache.go:183] Ignoring node capz-vo7bw1-md-0-tz559 because it is not ready: [{MemoryPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-vo7bw1-md-0-tz559" not found]}]
I0904 09:36:40.749585       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, 0 CPU, true)
I0904 09:36:40.749693       1 controller.go:690] Syncing backends for all LB services.
I0904 09:36:40.749768       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 09:36:40.749863       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 09:36:40.749956       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 09:36:40.750120       1 controller.go:686] It took 0.000426202 seconds to finish syncNodes
I0904 09:36:40.750213       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-tz559"
W0904 09:36:40.750301       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-vo7bw1-md-0-tz559" does not exist
I0904 09:36:40.750645       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-vo7bw1-md-0-tz559}
I0904 09:36:40.750752       1 taint_manager.go:471] "Updating known taints on node" node="capz-vo7bw1-md-0-tz559" taints=[]
I0904 09:36:40.753016       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd3a37ca8b3f34, ext:17471996516, loc:(*time.Location)(0x6f10040)}}
I0904 09:36:40.754813       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd3a4a2cfd6588, ext:91049902776, loc:(*time.Location)(0x6f10040)}}
I0904 09:36:40.753754       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a3f593c5f0b, ext:47718485463, loc:(*time.Location)(0x6f10040)}}
I0904 09:36:40.755222       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a4a2d03b26e, ext:91050315678, loc:(*time.Location)(0x6f10040)}}
... skipping 147 lines ...
I0904 09:36:44.799345       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/kube-proxy" (1.389907ms)
I0904 09:36:46.049733       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="115.5µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:40918" resp=200
I0904 09:36:46.272396       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-vo7bw1/providers/Microsoft.Compute/virtualMachines/capz-vo7bw1-md-0-tz559), assuming it is managed by availability set: not a vmss instance
I0904 09:36:46.272496       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-vo7bw1/providers/Microsoft.Compute/virtualMachines/capz-vo7bw1-md-0-tz559), assuming it is managed by availability set: not a vmss instance
I0904 09:36:46.272532       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-vo7bw1-md-0-tz559"
I0904 09:36:46.272556       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-vo7bw1-md-0-tz559"
I0904 09:36:49.696068       1 topologycache.go:183] Ignoring node capz-vo7bw1-md-0-tz559 because it is not ready: [{MemoryPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 09:36:40 +0000 UTC 2022-09-04 09:36:40 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-vo7bw1-md-0-tz559" not found]}]
I0904 09:36:49.696118       1 topologycache.go:183] Ignoring node capz-vo7bw1-md-0-7wvgh because it is not ready: [{MemoryPressure False 2022-09-04 09:36:49 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 09:36:49 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 09:36:49 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 09:36:49 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-vo7bw1-md-0-7wvgh" not found]}]
I0904 09:36:49.696147       1 topologycache.go:179] Ignoring node capz-vo7bw1-control-plane-96q6v because it has an excluded label
I0904 09:36:49.696157       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, 0 CPU, true)
I0904 09:36:49.696185       1 controller.go:690] Syncing backends for all LB services.
I0904 09:36:49.696197       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 09:36:49.696208       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 09:36:49.696213       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 09:36:49.696220       1 controller.go:686] It took 3.61e-05 seconds to finish syncNodes
I0904 09:36:49.696242       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-7wvgh"
W0904 09:36:49.696257       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-vo7bw1-md-0-7wvgh" does not exist
I0904 09:36:49.696317       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-vo7bw1-md-0-7wvgh}
I0904 09:36:49.696332       1 taint_manager.go:471] "Updating known taints on node" node="capz-vo7bw1-md-0-7wvgh" taints=[]
I0904 09:36:49.697460       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a4b2e199847, ext:95068527991, loc:(*time.Location)(0x6f10040)}}
I0904 09:36:49.697570       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a4c69940103, ext:99992664015, loc:(*time.Location)(0x6f10040)}}
I0904 09:36:49.697593       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-vo7bw1-md-0-7wvgh], creating 1
I0904 09:36:49.698427       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd3a4b2fa25ad9, ext:95094267913, loc:(*time.Location)(0x6f10040)}}
... skipping 311 lines ...
I0904 09:37:11.388295       1 controller.go:728] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0904 09:37:11.388402       1 controller.go:753] Finished updateLoadBalancerHosts
I0904 09:37:11.388509       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0904 09:37:11.388714       1 controller.go:686] It took 0.000633604 seconds to finish syncNodes
I0904 09:37:11.388089       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-tz559"
I0904 09:37:11.388163       1 topologycache.go:179] Ignoring node capz-vo7bw1-control-plane-96q6v because it has an excluded label
I0904 09:37:11.389069       1 topologycache.go:183] Ignoring node capz-vo7bw1-md-0-7wvgh because it is not ready: [{MemoryPressure False 2022-09-04 09:37:10 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 09:37:10 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 09:37:10 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 09:37:10 +0000 UTC 2022-09-04 09:36:49 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0904 09:37:11.389221       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, 2000 CPU, true)
I0904 09:37:11.389114       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-vo7bw1-md-0-tz559"
I0904 09:37:11.402280       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-tz559"
I0904 09:37:11.403454       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-vo7bw1-md-0-tz559" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0904 09:37:11.433750       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-vo7bw1/providers/Microsoft.Compute/virtualMachines/capz-vo7bw1-md-0-7wvgh), assuming it is managed by availability set: not a vmss instance
I0904 09:37:11.433988       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-vo7bw1/providers/Microsoft.Compute/virtualMachines/capz-vo7bw1-md-0-7wvgh), assuming it is managed by availability set: not a vmss instance
... skipping 15 lines ...
I0904 09:37:12.314431       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a5212bc5379, ext:122609430697, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:12.314520       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd3a5212bf23ca, ext:122609614998, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:12.314612       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0904 09:37:12.314728       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0904 09:37:12.314804       1 daemon_controller.go:1119] Updating daemon set status
I0904 09:37:12.315039       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (1.686509ms)
I0904 09:37:12.882014       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-vo7bw1-md-0-tz559 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 09:37:01 +0000 UTC,LastTransitionTime:2022-09-04 09:36:40 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 09:37:11 +0000 UTC,LastTransitionTime:2022-09-04 09:37:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 09:37:12.882127       1 node_lifecycle_controller.go:1092] Node capz-vo7bw1-md-0-tz559 ReadyCondition updated. Updating timestamp.
I0904 09:37:12.895434       1 node_lifecycle_controller.go:938] Node capz-vo7bw1-md-0-tz559 is healthy again, removing all taints
I0904 09:37:12.896926       1 node_lifecycle_controller.go:1092] Node capz-vo7bw1-md-0-7wvgh ReadyCondition updated. Updating timestamp.
I0904 09:37:12.896235       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-tz559"
I0904 09:37:12.896795       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-vo7bw1-md-0-tz559}
I0904 09:37:12.898229       1 node_lifecycle_controller.go:1259] Controller detected that zone eastus::0 is now in state Normal.
... skipping 73 lines ...
I0904 09:37:20.473812       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-vo7bw1-md-0-7wvgh" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0904 09:37:22.716872       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 09:37:22.717994       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0904 09:37:22.772365       1 pv_controller_base.go:612] resyncing PV controller
I0904 09:37:22.816427       1 gc_controller.go:221] GC'ing orphaned
I0904 09:37:22.816484       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0904 09:37:22.899995       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-vo7bw1-md-0-7wvgh transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 09:37:10 +0000 UTC,LastTransitionTime:2022-09-04 09:36:49 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 09:37:20 +0000 UTC,LastTransitionTime:2022-09-04 09:37:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 09:37:22.900062       1 node_lifecycle_controller.go:1092] Node capz-vo7bw1-md-0-7wvgh ReadyCondition updated. Updating timestamp.
I0904 09:37:22.912698       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-vo7bw1-md-0-7wvgh"
I0904 09:37:22.913906       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-vo7bw1-md-0-7wvgh}
I0904 09:37:22.913936       1 taint_manager.go:471] "Updating known taints on node" node="capz-vo7bw1-md-0-7wvgh" taints=[]
I0904 09:37:22.914626       1 node_lifecycle_controller.go:938] Node capz-vo7bw1-md-0-7wvgh is healthy again, removing all taints
I0904 09:37:22.914798       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-vo7bw1-md-0-7wvgh"
... skipping 7 lines ...
I0904 09:37:24.658080       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:2, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd3a5527397087, ext:134953174355, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:24.658153       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7847f46f86" need=2 creating=2
I0904 09:37:24.658284       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7847f46f86 to 2"
I0904 09:37:24.675705       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-04 09:37:24.657794381 +0000 UTC m=+134.952892953 - now: 2022-09-04 09:37:24.675685278 +0000 UTC m=+134.970783850]
I0904 09:37:24.676380       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0904 09:37:24.691042       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="45.193643ms"
I0904 09:37:24.691097       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:37:24.691167       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-04 09:37:24.691126461 +0000 UTC m=+134.986225033"
I0904 09:37:24.693118       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-04 09:37:24 +0000 UTC - now: 2022-09-04 09:37:24.693107671 +0000 UTC m=+134.988206243]
I0904 09:37:24.714353       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0904 09:37:24.715774       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-mk527" podUID=b5887a08-076d-463a-90c7-4e33450f3c03
I0904 09:37:24.715799       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-mk527"
I0904 09:37:24.715833       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-mk527, PodDisruptionBudget controller will avoid syncing.
I0904 09:37:24.715839       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-mk527"
I0904 09:37:24.716024       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-mk527"
I0904 09:37:24.716345       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="25.160235ms"
I0904 09:37:24.716379       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-04 09:37:24.716364096 +0000 UTC m=+135.011462768"
I0904 09:37:24.715861       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-mk527 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-mk527", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"b5887a08-076d-463a-90c7-4e33450f3c03", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 9, 37, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"f0b471d0-e47c-48f7-a512-7a1b683f1a71", Controller:(*bool)(0xc00195afc7), BlockOwnerDeletion:(*bool)(0xc00195afc8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 9, 37, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016ed368), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0016ed380), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0016ed398), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-ndqqf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00170e8a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00170e9c0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-ndqqf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0020f53c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00195b390), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0004384d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00195b400)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00195b420)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc00195b428), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00195b42c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0023eb000), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 09:37:24.716419       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd3a5527397087, ext:134953174355, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:24.716560       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-mk527
I0904 09:37:24.717031       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-mk527"
I0904 09:37:24.717985       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-04 09:37:24 +0000 UTC - now: 2022-09-04 09:37:24.717976505 +0000 UTC m=+135.013075177]
I0904 09:37:24.718019       1 progress.go:195] Queueing up deployment "csi-azurefile-controller" for a progress check after 599s
I0904 09:37:24.718053       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="1.664909ms"
... skipping 6 lines ...
I0904 09:37:24.733340       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7847f46f86, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 09:37:24.733491       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-2pdnq"
I0904 09:37:24.744022       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-2pdnq" podUID=ece2ec52-a6f5-4be0-83ff-ecfdee5f4715
I0904 09:37:24.744059       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-2pdnq"
I0904 09:37:24.744645       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-2pdnq, PodDisruptionBudget controller will avoid syncing.
I0904 09:37:24.744666       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-2pdnq"
I0904 09:37:24.744741       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-2pdnq created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-2pdnq", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"ece2ec52-a6f5-4be0-83ff-ecfdee5f4715", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 9, 37, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"f0b471d0-e47c-48f7-a512-7a1b683f1a71", Controller:(*bool)(0xc001ac42c7), BlockOwnerDeletion:(*bool)(0xc001ac42c8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 9, 37, 24, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0019946f0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc001994708), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001994720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-7t6hx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00170f360), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00170f520)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-7t6hx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002478c40), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ac4680), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0001f5810), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ac46f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001ac4710)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001ac4718), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001ac471c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0024e09e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 09:37:24.745339       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd3a5527397087, ext:134953174355, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:24.746574       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-2pdnq"
I0904 09:37:24.776340       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0904 09:37:24.776392       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-04 09:37:24.776375119 +0000 UTC m=+135.071473691"
I0904 09:37:24.778252       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-04 09:37:24 +0000 UTC - now: 2022-09-04 09:37:24.778221629 +0000 UTC m=+135.073320201]
I0904 09:37:24.778301       1 progress.go:195] Queueing up deployment "csi-azurefile-controller" for a progress check after 599s
... skipping 213 lines ...
I0904 09:37:27.899244       1 replica_set.go:394] Pod csi-snapshot-controller-84ccd6c756-4t6ln created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-snapshot-controller-84ccd6c756-4t6ln", GenerateName:"csi-snapshot-controller-84ccd6c756-", Namespace:"kube-system", SelfLink:"", UID:"7337da87-b35a-4ea3-b0a0-745faf124fe5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 9, 37, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-snapshot-controller", "pod-template-hash":"84ccd6c756"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-snapshot-controller-84ccd6c756", UID:"0beeb403-e0e8-456b-a205-0ad7629fe0c9", Controller:(*bool)(0xc001230187), BlockOwnerDeletion:(*bool)(0xc001230188)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 9, 37, 27, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002a27d70), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-mjsv9", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002845600), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-snapshot-controller", Image:"mcr.microsoft.com/oss/kubernetes-csi/snapshot-controller:v5.0.1", Command:[]string(nil), Args:[]string{"--v=2", "--leader-election=true", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-mjsv9", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001230228), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-snapshot-controller-sa", DeprecatedServiceAccount:"csi-snapshot-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00025e9a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Equal", Value:"true", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012302b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012302d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc0012302d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0012302dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001ca39e0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 09:37:27.899571       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-snapshot-controller-84ccd6c756", timestamp:time.Time{wall:0xc0bd3a55f4723570, ext:138174998688, loc:(*time.Location)(0x6f10040)}}
I0904 09:37:27.899634       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-snapshot-controller-84ccd6c756-4t6ln"
I0904 09:37:27.900466       1 controller_utils.go:581] Controller csi-snapshot-controller-84ccd6c756 created pod csi-snapshot-controller-84ccd6c756-4t6ln
I0904 09:37:27.901048       1 event.go:294] "Event occurred" object="kube-system/csi-snapshot-controller-84ccd6c756" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-snapshot-controller-84ccd6c756-4t6ln"
I0904 09:37:27.904034       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="41.295123ms"
I0904 09:37:27.904068       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:37:27.904109       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-04 09:37:27.904093246 +0000 UTC m=+138.199191818"
I0904 09:37:27.904533       1 deployment_util.go:775] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-04 09:37:27 +0000 UTC - now: 2022-09-04 09:37:27.904524048 +0000 UTC m=+138.199622720]
I0904 09:37:27.911490       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-snapshot-controller-84ccd6c756-52k5r" podUID=8733d4ca-6ac6-4d28-8a70-0734028e765a
I0904 09:37:27.911648       1 disruption.go:479] addPod called on pod "csi-snapshot-controller-84ccd6c756-52k5r"
I0904 09:37:27.911907       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-snapshot-controller-84ccd6c756-52k5r, PodDisruptionBudget controller will avoid syncing.
I0904 09:37:27.912015       1 disruption.go:482] No matching pdb for pod "csi-snapshot-controller-84ccd6c756-52k5r"
... skipping 1474 lines ...
I0904 09:41:44.700267       1 disruption.go:482] No matching pdb for pod "azurefile-volume-tester-t7tc8-56b6779f87-ht4tk"
I0904 09:41:44.700519       1 replica_set.go:394] Pod azurefile-volume-tester-t7tc8-56b6779f87-ht4tk created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-t7tc8-56b6779f87-ht4tk", GenerateName:"azurefile-volume-tester-t7tc8-56b6779f87-", Namespace:"azurefile-5356", SelfLink:"", UID:"cba25ffb-5f93-46ae-84bf-7cd446682750", ResourceVersion:"1956", Generation:0, CreationTimestamp:time.Date(2022, time.September, 4, 9, 41, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5018949295715050020", "pod-template-hash":"56b6779f87"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-t7tc8-56b6779f87", UID:"18f26daf-9e31-477e-ab93-a5ce30652e3b", Controller:(*bool)(0xc0020dc5a7), BlockOwnerDeletion:(*bool)(0xc0020dc5a8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 4, 9, 41, 44, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00244a468), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc00244a480), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-xhv7v", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002c575a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-xhv7v", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0020dc678), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00085a150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020dc6b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020dc6d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020dc6d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020dc6dc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0021078d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0904 09:41:44.701109       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-t7tc8-56b6779f87", timestamp:time.Time{wall:0xc0bd3a962866d0a0, ext:394972925392, loc:(*time.Location)(0x6f10040)}}
I0904 09:41:44.701358       1 taint_manager.go:431] "Noticed pod update" pod="azurefile-5356/azurefile-volume-tester-t7tc8-56b6779f87-ht4tk"
I0904 09:41:44.701852       1 event.go:294] "Event occurred" object="azurefile-5356/azurefile-volume-tester-t7tc8-56b6779f87" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: azurefile-volume-tester-t7tc8-56b6779f87-ht4tk"
I0904 09:41:44.713790       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-t7tc8" duration="51.89018ms"
I0904 09:41:44.714067       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-t7tc8" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-t7tc8\": the object has been modified; please apply your changes to the latest version and try again"
I0904 09:41:44.714240       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-t7tc8" startTime="2022-09-04 09:41:44.714216816 +0000 UTC m=+395.009315388"
I0904 09:41:44.714791       1 deployment_util.go:775] Deployment "azurefile-volume-tester-t7tc8" timed out (false) [last progress check: 2022-09-04 09:41:44 +0000 UTC - now: 2022-09-04 09:41:44.714781119 +0000 UTC m=+395.009879691]
I0904 09:41:44.720531       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azurefile-5356/azurefile-volume-tester-t7tc8-56b6779f87"
I0904 09:41:44.720763       1 disruption.go:494] updatePod called on pod "azurefile-volume-tester-t7tc8-56b6779f87-ht4tk"
I0904 09:41:44.720893       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-t7tc8-56b6779f87-ht4tk, PodDisruptionBudget controller will avoid syncing.
I0904 09:41:44.721027       1 disruption.go:497] No matching pdb for pod "azurefile-volume-tester-t7tc8-56b6779f87-ht4tk"
... skipping 1226 lines ...
I0904 09:44:25.285183       1 namespace_controller.go:180] Finished syncing namespace "azurefile-3410" (48.1µs)
2022/09/04 09:44:25 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 315.781 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-g7dww, container manager
STEP: Dumping workload cluster default/capz-vo7bw1 logs
Sep  4 09:45:57.291: INFO: Collecting logs for Linux node capz-vo7bw1-control-plane-96q6v in cluster capz-vo7bw1 in namespace default

Sep  4 09:46:57.292: INFO: Collecting boot logs for AzureMachine capz-vo7bw1-control-plane-96q6v

Failed to get logs for machine capz-vo7bw1-control-plane-pq9pc, cluster default/capz-vo7bw1: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 09:46:58.201: INFO: Collecting logs for Linux node capz-vo7bw1-md-0-tz559 in cluster capz-vo7bw1 in namespace default

Sep  4 09:47:58.205: INFO: Collecting boot logs for AzureMachine capz-vo7bw1-md-0-tz559

Failed to get logs for machine capz-vo7bw1-md-0-7444dfcbd4-snpgb, cluster default/capz-vo7bw1: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  4 09:47:58.544: INFO: Collecting logs for Linux node capz-vo7bw1-md-0-7wvgh in cluster capz-vo7bw1 in namespace default

Sep  4 09:48:58.546: INFO: Collecting boot logs for AzureMachine capz-vo7bw1-md-0-7wvgh

Failed to get logs for machine capz-vo7bw1-md-0-7444dfcbd4-t9wfs, cluster default/capz-vo7bw1: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-vo7bw1 kube-system pod logs
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-btmh6, container coredns
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-mk527, container csi-attacher
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-2pdnq, container azurefile
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-84ccd6c756-4t6ln, container csi-snapshot-controller
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-2pdnq, container csi-provisioner
... skipping 35 lines ...
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-vo7bw1-control-plane-96q6v, container kube-controller-manager
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-5j5dx, container azurefile
STEP: Creating log watcher for controller kube-system/metrics-server-76f7667fbf-bm545, container metrics-server
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-gbsmk
STEP: Creating log watcher for controller kube-system/kube-proxy-5mgfw, container kube-proxy
STEP: Collecting events for Pod kube-system/etcd-capz-vo7bw1-control-plane-96q6v
STEP: failed to find events of Pod "etcd-capz-vo7bw1-control-plane-96q6v"
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-5vsjm, container azurefile
STEP: Collecting events for Pod kube-system/kube-proxy-5mgfw
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-s284d, container liveness-probe
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-vo7bw1-control-plane-96q6v, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-vo7bw1-control-plane-96q6v, container kube-apiserver
STEP: Creating log watcher for controller kube-system/etcd-capz-vo7bw1-control-plane-96q6v, container etcd
... skipping 26 lines ...