Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-04 23:43
Elapsed: 38m16s
Revision:
Uploader: crier

No Test Failures!


Passed tests: 6
Skipped tests: 28

Error lines from build-log.txt

... skipping 705 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
... skipping 134 lines ...
# Wait for the kubeconfig to become available.
timeout --foreground 300 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets | grep capz-auh8ae-kubeconfig; do sleep 1; done"
capz-auh8ae-kubeconfig                 cluster.x-k8s.io/secret   1      0s
# Get kubeconfig and store it locally.
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 get secrets capz-auh8ae-kubeconfig -o json | jq -r .data.value | base64 --decode > ./kubeconfig
timeout --foreground 600 bash -c "while ! /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig get nodes | grep control-plane; do sleep 1; done"
error: the server doesn't have a resource type "nodes"
No resources found
capz-auh8ae-control-plane-ffdm6   NotReady   control-plane   1s    v1.26.0-alpha.0.378+bcea98234f0fdc
run "/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kubectl-v1.22.4 --kubeconfig=./kubeconfig ..." to work with the new target cluster
make[1]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
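The kubeconfig retrieval above (read the CAPI secret, take `.data.value`, base64-decode it, then poll under a `timeout` deadline) can be sketched without a live cluster. Everything below is a stand-in: the mock payload and the `kubeconfig.demo` file name are illustrative, not the job's real objects, and a real run would go through `kubectl get secret ... -o json | jq -r .data.value` as the log shows.

```shell
#!/usr/bin/env bash
# Sketch of the harness pattern: decode a base64 kubeconfig payload and
# poll for readiness with a deadline. All names/values are mock stand-ins.
set -euo pipefail

# Stand-in for the secret's .data.value field (kubeconfig, base64-encoded).
encoded=$(printf 'apiVersion: v1\nkind: Config\n' | base64 | tr -d '\n')

# Decode it the same way the Makefile does (base64 --decode > ./kubeconfig).
printf '%s' "$encoded" | base64 --decode > ./kubeconfig.demo

# Poll-with-deadline, mirroring `timeout --foreground N bash -c 'while ! ...'`.
timeout --foreground 5 bash -c \
  'while ! grep -q "kind: Config" ./kubeconfig.demo; do sleep 1; done'
echo "kubeconfig ready"
```

The `--foreground` flag matters when this runs in a CI shell: it lets `timeout` forward signals so the wait loop can be interrupted cleanly.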
Waiting for 1 control plane machine(s), 2 worker machine(s), and  windows machine(s) to become Ready
node/capz-auh8ae-control-plane-ffdm6 condition met
... skipping 54 lines ...
Pre-Provisioned 
  should use a pre-provisioned volume and mount it as readOnly in a pod [file.csi.azure.com] [Windows]
  /home/prow/go/src/sigs.k8s.io/azurefile-csi-driver/test/e2e/pre_provisioning_test.go:77
STEP: Creating a kubernetes client
Sep  5 00:04:33.356: INFO: >>> kubeConfig: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/kubeconfig
STEP: Building a namespace api object, basename azurefile
Sep  5 00:04:34.077: INFO: Error listing PodSecurityPolicies; assuming PodSecurityPolicy is disabled: the server could not find the requested resource
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
2022/09/05 00:04:34 Check driver pods if restarts ...
check the driver pods if restarts ...
======================================================================================
2022/09/05 00:04:35 Check successfully
... skipping 180 lines ...
Sep  5 00:05:08.143: INFO: PersistentVolumeClaim pvc-wnz9g found but phase is Pending instead of Bound.
Sep  5 00:05:10.253: INFO: PersistentVolumeClaim pvc-wnz9g found and phase=Bound (25.416004743s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  5 00:05:10.578: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-mhq22" in namespace "azurefile-5194" to be "Succeeded or Failed"
Sep  5 00:05:10.685: INFO: Pod "azurefile-volume-tester-mhq22": Phase="Pending", Reason="", readiness=false. Elapsed: 107.003563ms
Sep  5 00:05:12.799: INFO: Pod "azurefile-volume-tester-mhq22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22160651s
Sep  5 00:05:14.914: INFO: Pod "azurefile-volume-tester-mhq22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.336056519s
Sep  5 00:05:17.029: INFO: Pod "azurefile-volume-tester-mhq22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.451143426s
STEP: Saw pod success
Sep  5 00:05:17.029: INFO: Pod "azurefile-volume-tester-mhq22" satisfied condition "Succeeded or Failed"
Sep  5 00:05:17.029: INFO: deleting Pod "azurefile-5194"/"azurefile-volume-tester-mhq22"
Sep  5 00:05:17.156: INFO: Pod azurefile-volume-tester-mhq22 has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-mhq22 in namespace azurefile-5194
Sep  5 00:05:17.276: INFO: deleting PVC "azurefile-5194"/"pvc-wnz9g"
Sep  5 00:05:17.276: INFO: Deleting PersistentVolumeClaim "pvc-wnz9g"
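The PVC checks above follow a poll-until-phase pattern: retry until the claim reports `Bound`, giving up at a deadline. A minimal stand-alone sketch of that loop, with a simulated phase sequence instead of real `kubectl get pvc` calls (the phase list and variable names are illustrative, not the suite's code):

```shell
# Simulated phase sequence standing in for repeated `kubectl get pvc` output.
phases="Pending Pending Bound"
result="timeout"
deadline=$((SECONDS + 10))
for phase in $phases; do
  if [ "$phase" = "Bound" ]; then
    result="Bound"
    break
  fi
  if [ "$SECONDS" -ge "$deadline" ]; then
    break
  fi
  # The real suite sleeps ~2s between polls; the sketch skips the delay.
done
echo "PVC phase: $result"
```

The same shape covers the pod waits later in the log (`Pending` until `Succeeded or Failed`), just with a different terminal phase and a 15m deadline.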
... skipping 155 lines ...
Sep  5 00:07:11.730: INFO: PersistentVolumeClaim pvc-tc4cp found but phase is Pending instead of Bound.
Sep  5 00:07:13.839: INFO: PersistentVolumeClaim pvc-tc4cp found and phase=Bound (21.193040353s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  5 00:07:14.166: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-9mhf8" in namespace "azurefile-156" to be "Error status code"
Sep  5 00:07:14.273: INFO: Pod "azurefile-volume-tester-9mhf8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.260853ms
Sep  5 00:07:16.386: INFO: Pod "azurefile-volume-tester-9mhf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220704758s
Sep  5 00:07:18.501: INFO: Pod "azurefile-volume-tester-9mhf8": Phase="Failed", Reason="", readiness=false. Elapsed: 4.335275059s
STEP: Saw pod failure
Sep  5 00:07:18.501: INFO: Pod "azurefile-volume-tester-9mhf8" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  5 00:07:18.613: INFO: deleting Pod "azurefile-156"/"azurefile-volume-tester-9mhf8"
Sep  5 00:07:18.724: INFO: Pod azurefile-volume-tester-9mhf8 has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-9mhf8 in namespace azurefile-156
Sep  5 00:07:18.842: INFO: deleting PVC "azurefile-156"/"pvc-tc4cp"
... skipping 179 lines ...
Sep  5 00:09:17.200: INFO: PersistentVolumeClaim pvc-7q5f5 found but phase is Pending instead of Bound.
Sep  5 00:09:19.308: INFO: PersistentVolumeClaim pvc-7q5f5 found and phase=Bound (2.213916785s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  5 00:09:19.633: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-ddl55" in namespace "azurefile-2546" to be "Succeeded or Failed"
Sep  5 00:09:19.741: INFO: Pod "azurefile-volume-tester-ddl55": Phase="Pending", Reason="", readiness=false. Elapsed: 107.697802ms
Sep  5 00:09:21.854: INFO: Pod "azurefile-volume-tester-ddl55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221177941s
Sep  5 00:09:23.968: INFO: Pod "azurefile-volume-tester-ddl55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.334962363s
STEP: Saw pod success
Sep  5 00:09:23.968: INFO: Pod "azurefile-volume-tester-ddl55" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  5 00:09:55.428: INFO: deleting Pod "azurefile-2546"/"azurefile-volume-tester-ddl55"
... skipping 728 lines ...
I0904 23:57:57.810868       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662335877\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662335877\" (2022-09-04 22:57:57 +0000 UTC to 2023-09-04 22:57:57 +0000 UTC (now=2022-09-04 23:57:57.810836089 +0000 UTC))"
I0904 23:57:57.810913       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0904 23:57:57.811753       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0904 23:57:57.812096       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/etc/kubernetes/pki/front-proxy-ca.crt"
I0904 23:57:57.813190       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
I0904 23:57:57.814093       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E0904 23:58:02.657488       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0904 23:58:02.657523       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0904 23:58:05.307102       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0904 23:58:05.307600       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-auh8ae-control-plane-ffdm6_def1e497-8102-4849-95e8-1114c609ab41 became leader"
W0904 23:58:05.346184       1 plugins.go:131] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0904 23:58:05.346884       1 azure_auth.go:232] Using AzurePublicCloud environment
I0904 23:58:05.346937       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
I0904 23:58:05.347026       1 azure_interfaceclient.go:63] Azure InterfacesClient (read ops) using rate limit config: QPS=1, bucket=5
... skipping 29 lines ...
I0904 23:58:05.348580       1 reflector.go:221] Starting reflector *v1.Secret (23h55m9.355189029s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:58:05.348602       1 reflector.go:257] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:58:05.348890       1 reflector.go:221] Starting reflector *v1.ServiceAccount (23h55m9.355189029s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:58:05.348911       1 reflector.go:257] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:58:05.348902       1 reflector.go:221] Starting reflector *v1.Node (23h55m9.355189029s) from vendor/k8s.io/client-go/informers/factory.go:134
I0904 23:58:05.349233       1 reflector.go:257] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
W0904 23:58:05.371201       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0904 23:58:05.371354       1 controllermanager.go:573] Starting "persistentvolume-binder"
I0904 23:58:05.377537       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
I0904 23:58:05.377565       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
I0904 23:58:05.377579       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs"
I0904 23:58:05.377592       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0904 23:58:05.377605       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0904 23:58:05.377619       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 23:58:05.377640       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 23:58:05.377662       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 23:58:05.377683       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 23:58:05.377698       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0904 23:58:05.377713       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0904 23:58:05.377755       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 23:58:05.377767       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 23:58:05.378049       1 controllermanager.go:602] Started "persistentvolume-binder"
I0904 23:58:05.378071       1 controllermanager.go:573] Starting "podgc"
I0904 23:58:05.378286       1 pv_controller_base.go:318] Starting persistent volume controller
I0904 23:58:05.378304       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0904 23:58:05.398626       1 controllermanager.go:602] Started "podgc"
... skipping 16 lines ...
I0904 23:58:05.418660       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0904 23:58:05.418901       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0904 23:58:05.418950       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0904 23:58:05.418967       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
I0904 23:58:05.418981       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0904 23:58:05.418996       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0904 23:58:05.419100       1 csi_plugin.go:257] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0904 23:58:05.419224       1 plugins.go:646] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0904 23:58:05.419460       1 controllermanager.go:602] Started "attachdetach"
I0904 23:58:05.419479       1 controllermanager.go:573] Starting "pv-protection"
I0904 23:58:05.419771       1 attach_detach_controller.go:328] Starting attach detach controller
I0904 23:58:05.419790       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0904 23:58:05.425539       1 controllermanager.go:602] Started "pv-protection"
... skipping 81 lines ...
I0904 23:58:06.061292       1 shared_informer.go:255] Waiting for caches to sync for ReplicaSet
I0904 23:58:06.212078       1 controllermanager.go:602] Started "cronjob"
I0904 23:58:06.212118       1 controllermanager.go:573] Starting "endpointslice"
I0904 23:58:06.212261       1 cronjob_controllerv2.go:135] "Starting cronjob controller v2"
I0904 23:58:06.212321       1 shared_informer.go:255] Waiting for caches to sync for cronjob
I0904 23:58:06.320561       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-control-plane-ffdm6"
W0904 23:58:06.321054       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-auh8ae-control-plane-ffdm6" does not exist
I0904 23:58:06.334387       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-control-plane-ffdm6"
I0904 23:58:06.409976       1 request.go:614] Waited for 75.177303ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts
I0904 23:58:06.413170       1 controllermanager.go:602] Started "endpointslice"
I0904 23:58:06.413198       1 controllermanager.go:573] Starting "namespace"
I0904 23:58:06.413271       1 endpointslice_controller.go:261] Starting endpoint slice controller
I0904 23:58:06.413285       1 shared_informer.go:255] Waiting for caches to sync for endpoint_slice
I0904 23:58:06.413365       1 topologycache.go:183] Ignoring node capz-auh8ae-control-plane-ffdm6 because it is not ready: [{MemoryPressure False 2022-09-04 23:58:06 +0000 UTC 2022-09-04 23:57:44 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-04 23:58:06 +0000 UTC 2022-09-04 23:57:44 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-04 23:58:06 +0000 UTC 2022-09-04 23:57:44 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-04 23:58:06 +0000 UTC 2022-09-04 23:57:44 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0904 23:58:06.413523       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0904 23:58:06.457808       1 request.go:614] Waited for 96.10538ms due to client-side throttling, not priority and fairness, request: POST:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/ttl-controller/token
I0904 23:58:06.469417       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-control-plane-ffdm6"
I0904 23:58:06.469777       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-auh8ae-control-plane-ffdm6" new_ttl="0s"
I0904 23:58:06.507498       1 request.go:614] Waited for 94.247749ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/api/v1/namespaces/kube-system/serviceaccounts/namespace-controller
I0904 23:58:06.573167       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-control-plane-ffdm6"
... skipping 503 lines ...
I0904 23:58:10.618252       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/coredns-84994b8c4" need=2 creating=2
I0904 23:58:10.618345       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/coredns-84994b8c4"
I0904 23:58:10.618782       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-84994b8c4 to 2"
I0904 23:58:10.624526       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 23:58:10.625068       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 23:58:10.617066555 +0000 UTC m=+14.339352602 - now: 2022-09-04 23:58:10.625056405 +0000 UTC m=+14.347342352]
I0904 23:58:10.629028       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="710.365909ms"
I0904 23:58:10.629217       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:58:10.629345       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 23:58:10.629252546 +0000 UTC m=+14.351538493"
I0904 23:58:10.629960       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 23:58:10 +0000 UTC - now: 2022-09-04 23:58:10.629953403 +0000 UTC m=+14.352239350]
I0904 23:58:10.634306       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/coredns"
I0904 23:58:10.634639       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/coredns" duration="5.375937ms"
I0904 23:58:10.634882       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-04 23:58:10.634864202 +0000 UTC m=+14.357150149"
I0904 23:58:10.635465       1 deployment_util.go:775] Deployment "coredns" timed out (false) [last progress check: 2022-09-04 23:58:10 +0000 UTC - now: 2022-09-04 23:58:10.63545885 +0000 UTC m=+14.357744897]
... skipping 232 lines ...
I0904 23:58:45.797453       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-kube-controllers-755ff8d7b5-dv7c8, PodDisruptionBudget controller will avoid syncing.
I0904 23:58:45.797595       1 disruption.go:482] No matching pdb for pod "calico-kube-controllers-755ff8d7b5-dv7c8"
I0904 23:58:45.797743       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-755ff8d7b5-dv7c8" podUID=5a23f2f4-6902-4607-9ac5-cfe7d67ef009
I0904 23:58:45.797980       1 controller_utils.go:581] Controller calico-kube-controllers-755ff8d7b5 created pod calico-kube-controllers-755ff8d7b5-dv7c8
I0904 23:58:45.798161       1 replica_set_utils.go:59] Updating status for : kube-system/calico-kube-controllers-755ff8d7b5, replicas 0->0 (need 1), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0904 23:58:45.798555       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="42.423224ms"
I0904 23:58:45.798726       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0904 23:58:45.798896       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-04 23:58:45.798877266 +0000 UTC m=+49.521163313"
I0904 23:58:45.799442       1 deployment_util.go:775] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-04 23:58:45 +0000 UTC - now: 2022-09-04 23:58:45.799433736 +0000 UTC m=+49.521719683]
I0904 23:58:45.799834       1 event.go:294] "Event occurred" object="kube-system/calico-kube-controllers-755ff8d7b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: calico-kube-controllers-755ff8d7b5-dv7c8"
I0904 23:58:45.812443       1 replica_set.go:457] Pod calico-kube-controllers-755ff8d7b5-dv7c8 updated, objectMeta {Name:calico-kube-controllers-755ff8d7b5-dv7c8 GenerateName:calico-kube-controllers-755ff8d7b5- Namespace:kube-system SelfLink: UID:5a23f2f4-6902-4607-9ac5-cfe7d67ef009 ResourceVersion:478 Generation:0 CreationTimestamp:2022-09-04 23:58:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:755ff8d7b5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-755ff8d7b5 UID:14bf5edf-cbbe-400b-8298-bedcfcb6d3fd Controller:0xc00222931e BlockOwnerDeletion:0xc00222931f}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:58:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14bf5edf-cbbe-400b-8298-bedcfcb6d3fd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} 
Subresource:}]} -> {Name:calico-kube-controllers-755ff8d7b5-dv7c8 GenerateName:calico-kube-controllers-755ff8d7b5- Namespace:kube-system SelfLink: UID:5a23f2f4-6902-4607-9ac5-cfe7d67ef009 ResourceVersion:480 Generation:0 CreationTimestamp:2022-09-04 23:58:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:calico-kube-controllers pod-template-hash:755ff8d7b5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:calico-kube-controllers-755ff8d7b5 UID:14bf5edf-cbbe-400b-8298-bedcfcb6d3fd Controller:0xc00233c297 BlockOwnerDeletion:0xc00233c298}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:58:45 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"14bf5edf-cbbe-400b-8298-bedcfcb6d3fd\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"calico-kube-controllers\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"DATASTORE_TYPE\"}":{".":{},"f:name":{},"f:value":{}},"k:{\"name\":\"ENABLED_CONTROLLERS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:readinessProbe":{".":{},"f:exec":{".":{},"f:command":{}},"f:failureThreshold":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 23:58:45 
+0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status}]}.
I0904 23:58:45.813576       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/calico-kube-controllers-755ff8d7b5" (50.816374ms)
I0904 23:58:45.813997       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-kube-controllers-755ff8d7b5", timestamp:time.Time{wall:0xc0bd6ccd6d7abe26, ext:49485304837, loc:(*time.Location)(0x6f10040)}}
... skipping 415 lines ...
I0904 23:59:12.857269       1 endpoints_controller.go:369] Finished syncing service "kube-system/kube-dns" endpoints. (12.938408ms)
I0904 23:59:12.857551       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (8.480057ms)
I0904 23:59:12.857755       1 endpointslicemirroring_controller.go:278] syncEndpoints("kube-system/kube-dns")
I0904 23:59:12.858106       1 endpointslicemirroring_controller.go:313] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0904 23:59:12.858209       1 endpointslicemirroring_controller.go:275] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (468.848µs)
I0904 23:59:13.832274       1 endpointslice_controller.go:319] Finished syncing service "kube-system/kube-dns" endpoint slices. (382.138µs)
I0904 23:59:14.874047       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-auh8ae-control-plane-ffdm6 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-04 23:58:09 +0000 UTC,LastTransitionTime:2022-09-04 23:57:44 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-04 23:59:10 +0000 UTC,LastTransitionTime:2022-09-04 23:59:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0904 23:59:14.874168       1 node_lifecycle_controller.go:1092] Node capz-auh8ae-control-plane-ffdm6 ReadyCondition updated. Updating timestamp.
I0904 23:59:14.874199       1 node_lifecycle_controller.go:938] Node capz-auh8ae-control-plane-ffdm6 is healthy again, removing all taints
I0904 23:59:14.874229       1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0904 23:59:16.130759       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="107.711µs" userAgent="kube-probe/1.26+" audit-ID="" srcIP="127.0.0.1:47332" resp=200
I0904 23:59:20.788401       1 endpoints_controller.go:528] Update endpoints for kube-system/kube-dns, ready: 3 not ready: 3
I0904 23:59:20.788795       1 replica_set.go:457] Pod coredns-84994b8c4-hsql7 updated, objectMeta {Name:coredns-84994b8c4-hsql7 GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:011633c2-c19d-4525-92bf-226818057e2d ResourceVersion:604 Generation:0 CreationTimestamp:2022-09-04 23:58:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[cni.projectcalico.org/containerID:f89fe321e75399c28177a627a0fb1c325c112ad69c9c8498f99625734d75f7e1 cni.projectcalico.org/podIP:192.168.229.1/32 cni.projectcalico.org/podIPs:192.168.229.1/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:d09e3510-d277-4525-96d5-c65f7fa363e2 Controller:0xc00197f70f BlockOwnerDeletion:0xc00197f750}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:58:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d09e3510-d277-4525-96d5-c65f7fa363e2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":
{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 23:58:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 23:59:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 23:59:12 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.229.1\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]} -> {Name:coredns-84994b8c4-hsql7 GenerateName:coredns-84994b8c4- Namespace:kube-system SelfLink: UID:011633c2-c19d-4525-92bf-226818057e2d ResourceVersion:626 Generation:0 CreationTimestamp:2022-09-04 23:58:10 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:84994b8c4] Annotations:map[cni.projectcalico.org/containerID:f89fe321e75399c28177a627a0fb1c325c112ad69c9c8498f99625734d75f7e1 cni.projectcalico.org/podIP:192.168.229.1/32 cni.projectcalico.org/podIPs:192.168.229.1/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-84994b8c4 UID:d09e3510-d277-4525-96d5-c65f7fa363e2 Controller:0xc001739b00 BlockOwnerDeletion:0xc001739b01}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-04 23:58:10 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d09e3510-d277-4525-96d5-c65f7fa363e2\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} 
{Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-04 23:58:10 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-04 23:59:11 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-04 23:59:20 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.229.1\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
... skipping 117 lines ...
I0905 00:00:04.366292       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0905 00:00:04.374517       1 controller.go:686] It took 0.008291019 seconds to finish syncNodes
I0905 00:00:04.368821       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6cc5b75356cc, ext:18650494635, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:04.374612       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6ce116540d6d, ext:128096893260, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:04.374643       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-auh8ae-mp-0000001], creating 1
I0905 00:00:04.368850       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000001"
W0905 00:00:04.374997       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-auh8ae-mp-0000001" does not exist
I0905 00:00:04.371147       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6cd3da4865b5, ext:75163238292, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:04.375072       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6ce1165b1463, ext:128097353794, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:04.375094       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-auh8ae-mp-0000001], creating 1
I0905 00:00:04.371223       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-auh8ae-mp-0000001}
I0905 00:00:04.375427       1 taint_manager.go:471] "Updating known taints on node" node="capz-auh8ae-mp-0000001" taints=[]
I0905 00:00:04.371244       1 topologycache.go:183] Ignoring node capz-auh8ae-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-05 00:00:04 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-05 00:00:04 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-05 00:00:04 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-05 00:00:04 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0905 00:00:04.375456       1 topologycache.go:179] Ignoring node capz-auh8ae-control-plane-ffdm6 because it has an excluded label
I0905 00:00:04.375467       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0905 00:00:04.389219       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000001"
I0905 00:00:04.398150       1 ttl_controller.go:275] "Changed ttl annotation" node="capz-auh8ae-mp-0000001" new_ttl="0s"
I0905 00:00:04.411530       1 controller_utils.go:581] Controller calico-node created pod calico-node-wjtdj
I0905 00:00:04.411855       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
... skipping 198 lines ...
I0905 00:00:18.200209       1 topologycache.go:179] Ignoring node capz-auh8ae-control-plane-ffdm6 because it has an excluded label
I0905 00:00:18.205814       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6ce2dff9a519, ext:135258740372, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:18.200320       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-auh8ae-mp-0000000}
I0905 00:00:18.200592       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000000"
I0905 00:00:18.206892       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6ce1c27ab563, ext:130763882506, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:18.207172       1 controller.go:686] It took 0.007578043 seconds to finish syncNodes
I0905 00:00:18.207384       1 topologycache.go:183] Ignoring node capz-auh8ae-mp-0000001 because it is not ready: [{MemoryPressure False 2022-09-05 00:00:14 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-05 00:00:14 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-05 00:00:14 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-05 00:00:14 +0000 UTC 2022-09-05 00:00:04 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0905 00:00:18.207730       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6ce48c61971a, ext:141930008313, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:18.209073       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set kube-proxy: [capz-auh8ae-mp-0000000], creating 1
I0905 00:00:18.207928       1 taint_manager.go:471] "Updating known taints on node" node="capz-auh8ae-mp-0000000" taints=[]
W0905 00:00:18.207996       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-auh8ae-mp-0000000" does not exist
I0905 00:00:18.208428       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0bd6ce48c6c3ea4, ext:141930706563, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:18.210090       1 daemon_controller.go:974] Nodes needing daemon pods for daemon set calico-node: [capz-auh8ae-mp-0000000], creating 1
I0905 00:00:18.208862       1 topologycache.go:183] Ignoring node capz-auh8ae-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-05 00:00:17 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-05 00:00:17 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-05 00:00:17 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-05 00:00:17 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-auh8ae-mp-0000000" not found]}]
I0905 00:00:18.210855       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0905 00:00:18.215223       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000000"
I0905 00:00:18.233573       1 controller_utils.go:581] Controller kube-proxy created pod kube-proxy-hj2fl
I0905 00:00:18.233871       1 daemon_controller.go:1036] Pods to delete for daemon set kube-proxy: [], deleting 0
I0905 00:00:18.234051       1 controller_utils.go:195] Controller still waiting on expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0bd6ce48c61971a, ext:141930008313, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:18.234315       1 daemon_controller.go:1119] Updating daemon set status
... skipping 223 lines ...
I0905 00:00:35.433528       1 controller.go:753] Finished updateLoadBalancerHosts
I0905 00:00:35.433534       1 controller.go:694] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
I0905 00:00:35.434241       1 controller.go:686] It took 0.000785254 seconds to finish syncNodes
I0905 00:00:35.434030       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000001"
I0905 00:00:35.434132       1 topologycache.go:179] Ignoring node capz-auh8ae-control-plane-ffdm6 because it has an excluded label
I0905 00:00:35.434201       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-auh8ae-mp-0000001"
I0905 00:00:35.434288       1 topologycache.go:183] Ignoring node capz-auh8ae-mp-0000000 because it is not ready: [{MemoryPressure False 2022-09-05 00:00:28 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-05 00:00:28 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-05 00:00:28 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-05 00:00:28 +0000 UTC 2022-09-05 00:00:17 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0905 00:00:35.434767       1 topologycache.go:215] Insufficient node info for topology hints (1 zones, %!s(int64=2000) CPU, true)
I0905 00:00:35.447029       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-auh8ae-mp-0000001" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0905 00:00:35.447257       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000001"
I0905 00:00:35.473587       1 disruption.go:494] updatePod called on pod "calico-node-m6gft"
I0905 00:00:35.473846       1 disruption.go:570] No PodDisruptionBudgets found for pod calico-node-m6gft, PodDisruptionBudget controller will avoid syncing.
I0905 00:00:35.473867       1 disruption.go:497] No matching pdb for pod "calico-node-m6gft"
... skipping 53 lines ...
I0905 00:00:38.487068       1 daemon_controller.go:1119] Updating daemon set status
I0905 00:00:38.487170       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (3.16573ms)
I0905 00:00:38.704415       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000000"
I0905 00:00:38.957979       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-15oqcz" (9.4µs)
I0905 00:00:39.834404       1 reflector.go:281] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0905 00:00:39.887351       1 pv_controller_base.go:612] resyncing PV controller
I0905 00:00:39.887541       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-auh8ae-mp-0000001 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-05 00:00:14 +0000 UTC,LastTransitionTime:2022-09-05 00:00:04 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-05 00:00:35 +0000 UTC,LastTransitionTime:2022-09-05 00:00:35 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0905 00:00:39.887840       1 node_lifecycle_controller.go:1092] Node capz-auh8ae-mp-0000001 ReadyCondition updated. Updating timestamp.
I0905 00:00:39.910128       1 node_lifecycle_controller.go:938] Node capz-auh8ae-mp-0000001 is healthy again, removing all taints
I0905 00:00:39.911615       1 node_lifecycle_controller.go:1092] Node capz-auh8ae-mp-0000000 ReadyCondition updated. Updating timestamp.
I0905 00:00:39.910857       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000001"
I0905 00:00:39.911437       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-auh8ae-mp-0000001}
I0905 00:00:39.912379       1 taint_manager.go:471] "Updating known taints on node" node="capz-auh8ae-mp-0000001" taints=[]
... skipping 71 lines ...
I0905 00:00:49.515703       1 daemon_controller.go:1036] Pods to delete for daemon set calico-node: [], deleting 0
I0905 00:00:49.515737       1 daemon_controller.go:1119] Updating daemon set status
I0905 00:00:49.515797       1 daemon_controller.go:1179] Finished syncing daemon set "kube-system/calico-node" (2.324054ms)
I0905 00:00:49.590272       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000000"
I0905 00:00:49.905496       1 gc_controller.go:221] GC'ing orphaned
I0905 00:00:49.905533       1 gc_controller.go:290] GC'ing unscheduled pods which are terminating.
I0905 00:00:49.914066       1 node_lifecycle_controller.go:1084] ReadyCondition for Node capz-auh8ae-mp-0000000 transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-05 00:00:38 +0000 UTC,LastTransitionTime:2022-09-05 00:00:17 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-05 00:00:48 +0000 UTC,LastTransitionTime:2022-09-05 00:00:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0905 00:00:49.914113       1 node_lifecycle_controller.go:1092] Node capz-auh8ae-mp-0000000 ReadyCondition updated. Updating timestamp.
I0905 00:00:49.922279       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-auh8ae-mp-0000000"
I0905 00:00:49.922827       1 taint_manager.go:466] "Noticed node update" node={nodeName:capz-auh8ae-mp-0000000}
I0905 00:00:49.922860       1 taint_manager.go:471] "Updating known taints on node" node="capz-auh8ae-mp-0000000" taints=[]
I0905 00:00:49.922879       1 taint_manager.go:492] "All taints were removed from the node. Cancelling all evictions..." node="capz-auh8ae-mp-0000000"
I0905 00:00:49.923235       1 node_lifecycle_controller.go:938] Node capz-auh8ae-mp-0000000 is healthy again, removing all taints
... skipping 10 lines ...
I0905 00:00:57.148782       1 replica_set.go:577] "Too few replicas" replicaSet="kube-system/csi-azurefile-controller-7847f46f86" need=2 creating=2
I0905 00:00:57.149240       1 deployment_controller.go:222] "ReplicaSet added" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0905 00:00:57.150129       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set csi-azurefile-controller-7847f46f86 to 2"
I0905 00:00:57.157779       1 deployment_controller.go:183] "Updating deployment" deployment="kube-system/csi-azurefile-controller"
I0905 00:00:57.158087       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-05 00:00:57.149659231 +0000 UTC m=+180.871945178 - now: 2022-09-05 00:00:57.158075138 +0000 UTC m=+180.880361085]
I0905 00:00:57.170332       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="46.384748ms"
I0905 00:00:57.170368       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0905 00:00:57.170518       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-05 00:00:57.170499835 +0000 UTC m=+180.892785782"
I0905 00:00:57.171578       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-05 00:00:57 +0000 UTC - now: 2022-09-05 00:00:57.171571713 +0000 UTC m=+180.893857660]
I0905 00:00:57.174921       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-bbctn
I0905 00:00:57.175787       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-bbctn"
I0905 00:00:57.176129       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-bbctn created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-bbctn", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"cd1ee070-252f-4d7e-9598-8f3c4b6f1258", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2022, time.September, 5, 0, 0, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"18ef24ad-f1bc-4a8f-89fb-0944177e9951", Controller:(*bool)(0xc002a7f937), BlockOwnerDeletion:(*bool)(0xc002a7f938)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 5, 0, 0, 57, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028da468), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0028da480), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0028da498), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-bt64r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00266b700), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00266b840)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-bt64r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0028ab8c0), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a7fce0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000fc700), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a7fd50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a7fd70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002a7fd78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a7fd7c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001ad8820), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0905 00:00:57.176733       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd6cee48dd544b, ext:180871008710, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:57.179051       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-bbctn"
I0905 00:00:57.179144       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-bbctn"
I0905 00:00:57.179269       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-bbctn, PodDisruptionBudget controller will avoid syncing.
I0905 00:00:57.179350       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-bbctn"
I0905 00:00:57.179441       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-bbctn" podUID=cd1ee070-252f-4d7e-9598-8f3c4b6f1258
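The "No PodDisruptionBudgets found" / "No matching pdb" lines above come from the disruption controller checking each new pod against the PDB selectors in its namespace. A minimal sketch of that matching step, assuming simplified `matchLabels`-only selectors (the real controller also handles `matchExpressions`); the `pdbs_for_pod` helper is illustrative, not the controller's actual function:

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A PDB selects a pod when every matchLabels entry is present on the pod."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

def pdbs_for_pod(pdbs: list, pod_labels: dict) -> list:
    """Return the PDBs (already filtered to the pod's namespace) that match."""
    return [p for p in pdbs if selector_matches(p["selector"], pod_labels)]

# Labels of the pod created above.
pod_labels = {"app": "csi-azurefile-controller", "pod-template-hash": "7847f46f86"}

# kube-system has no PDBs in this run, so the controller skips syncing --
# the "No matching pdb" message in the log.
assert pdbs_for_pod([], pod_labels) == []

# With a matching PDB present, the same pod would be selected.
pdb = {"name": "csi-pdb", "selector": {"app": "csi-azurefile-controller"}}
assert pdbs_for_pod([pdb], pod_labels) == [pdb]
```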
I0905 00:00:57.185863       1 controller_utils.go:581] Controller csi-azurefile-controller-7847f46f86 created pod csi-azurefile-controller-7847f46f86-rrkpj
I0905 00:00:57.186188       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7847f46f86, replicas 0->0 (need 2), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I0905 00:00:57.186584       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-7847f46f86" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-7847f46f86-rrkpj"
I0905 00:00:57.188668       1 replica_set.go:394] Pod csi-azurefile-controller-7847f46f86-rrkpj created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-7847f46f86-rrkpj", GenerateName:"csi-azurefile-controller-7847f46f86-", Namespace:"kube-system", SelfLink:"", UID:"e76c1656-43ec-4776-b093-5320403e2dcc", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2022, time.September, 5, 0, 0, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"7847f46f86"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-7847f46f86", UID:"18ef24ad-f1bc-4a8f-89fb-0944177e9951", Controller:(*bool)(0xc001f3c3f7), BlockOwnerDeletion:(*bool)(0xc001f3c3f8)}}, Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 5, 0, 0, 57, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0028da930), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0028da948), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0028da960), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-kmjcn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00266bd20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00266be60)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-kmjcn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc000242700), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001f3c7a0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0000fccb0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f3c810)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001f3c830)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc001f3c838), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001f3c83c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001ad8c70), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0905 00:00:57.189914       1 taint_manager.go:431] "Noticed pod update" pod="kube-system/csi-azurefile-controller-7847f46f86-rrkpj"
I0905 00:00:57.189838       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd6cee48dd544b, ext:180871008710, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:57.189804       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-7847f46f86-rrkpj" podUID=e76c1656-43ec-4776-b093-5320403e2dcc
I0905 00:00:57.189729       1 disruption.go:479] addPod called on pod "csi-azurefile-controller-7847f46f86-rrkpj"
I0905 00:00:57.191069       1 disruption.go:570] No PodDisruptionBudgets found for pod csi-azurefile-controller-7847f46f86-rrkpj, PodDisruptionBudget controller will avoid syncing.
I0905 00:00:57.191286       1 disruption.go:482] No matching pdb for pod "csi-azurefile-controller-7847f46f86-rrkpj"
... skipping 13 lines ...
I0905 00:00:57.206506       1 disruption.go:497] No matching pdb for pod "csi-azurefile-controller-7847f46f86-rrkpj"
I0905 00:00:57.210305       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="kube-system/csi-azurefile-controller-7847f46f86"
I0905 00:00:57.210841       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-azurefile-controller-7847f46f86" (62.386603ms)
I0905 00:00:57.211023       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-7847f46f86", timestamp:time.Time{wall:0xc0bd6cee48dd544b, ext:180871008710, loc:(*time.Location)(0x6f10040)}}
I0905 00:00:57.211313       1 replica_set_utils.go:59] Updating status for : kube-system/csi-azurefile-controller-7847f46f86, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
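The `ControlleeExpectations{add:..., del:...}` lines trace the ReplicaSet controller's expectations bookkeeping: before issuing pod creations it records how many it expects, decrements the count as the matching pod-add events arrive from the informer, and only considers a re-sync meaningful once expectations are fulfilled. A hedged sketch of that mechanism for this two-replica ReplicaSet; the class mirrors the `add`/`del` fields in the log but is illustrative, not client-go's implementation:

```python
class ControlleeExpectations:
    """Track outstanding pod creations/deletions for one controller key."""

    def __init__(self, key: str, add: int = 0, delete: int = 0):
        self.key, self.add, self.delete = key, add, delete

    def creation_observed(self) -> None:
        self.add -= 1  # the "Lowered expectations" step in the log

    def deletion_observed(self) -> None:
        self.delete -= 1

    def fulfilled(self) -> bool:
        # A sync proceeds only when no creations or deletions are pending.
        return self.add <= 0 and self.delete <= 0

exp = ControlleeExpectations(
    "kube-system/csi-azurefile-controller-7847f46f86", add=2)
assert not exp.fulfilled()   # two pod creations still pending
exp.creation_observed()      # pod ...-bbctn observed -> add:1
exp.creation_observed()      # pod ...-rrkpj observed -> add:0
assert exp.fulfilled()       # matches "Controller expectations fulfilled"
```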
I0905 00:00:57.221337       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="23.652407ms"
I0905 00:00:57.221591       1 deployment_controller.go:497] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0905 00:00:57.221791       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-05 00:00:57.221771836 +0000 UTC m=+180.944057883"
I0905 00:00:57.223748       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-05 00:00:57 +0000 UTC - now: 2022-09-05 00:00:57.223721777 +0000 UTC m=+180.946007724]
I0905 00:00:57.223952       1 progress.go:195] Queueing up deployment "csi-azurefile-controller" for a progress check after 599s
I0905 00:00:57.224203       1 deployment_controller.go:585] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="2.418275ms"
I0905 00:00:57.227386       1 deployment_controller.go:583] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-05 00:00:57.22736684 +0000 UTC m=+180.949652887"
I0905 00:00:57.229276       1 deployment_util.go:775] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-05 00:00:57 +0000 UTC - now: 2022-09-05 00:00:57.229267977 +0000 UTC m=+180.951554024]
... skipping 1662 lines ...
I0905 00:07:28.698030       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-tfrcc-77484f4c49-hx5vn, PodDisruptionBudget controller will avoid syncing.
I0905 00:07:28.698169       1 disruption.go:482] No matching pdb for pod "azurefile-volume-tester-tfrcc-77484f4c49-hx5vn"
I0905 00:07:28.698328       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49-hx5vn" podUID=bc48dee0-8a81-47c5-90b0-5918cc239eb6
I0905 00:07:28.698627       1 pvc_protection_controller.go:149] "Processing PVC" PVC="azurefile-1563/pvc-5vqf8"
I0905 00:07:28.698733       1 pvc_protection_controller.go:152] "Finished processing PVC" PVC="azurefile-1563/pvc-5vqf8" duration="39.003µs"
I0905 00:07:28.698588       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" duration="19.851731ms"
I0905 00:07:28.698945       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-tfrcc\": the object has been modified; please apply your changes to the latest version and try again"
I0905 00:07:28.699128       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" startTime="2022-09-05 00:07:28.699109724 +0000 UTC m=+572.421395771"
I0905 00:07:28.699644       1 deployment_util.go:775] Deployment "azurefile-volume-tester-tfrcc" timed out (false) [last progress check: 2022-09-05 00:07:28 +0000 UTC - now: 2022-09-05 00:07:28.699635962 +0000 UTC m=+572.421921909]
I0905 00:07:28.704576       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49"
I0905 00:07:28.705321       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" duration="6.196847ms"
I0905 00:07:28.705512       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" startTime="2022-09-05 00:07:28.705453881 +0000 UTC m=+572.427739828"
I0905 00:07:28.705985       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49" (22.62273ms)
... skipping 8 lines ...
I0905 00:07:28.711946       1 disruption.go:497] No matching pdb for pod "azurefile-volume-tester-tfrcc-77484f4c49-hx5vn"
I0905 00:07:28.715456       1 deployment_controller.go:288] "ReplicaSet updated" replicaSet="azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49"
I0905 00:07:28.717679       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49" (11.537932ms)
I0905 00:07:28.717726       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49", timestamp:time.Time{wall:0xc0bd6d5028be4d0c, ext:572405846151, loc:(*time.Location)(0x6f10040)}}
I0905 00:07:28.717923       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49" (200.714µs)
I0905 00:07:28.720179       1 deployment_controller.go:585] "Finished syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" duration="14.712361ms"
I0905 00:07:28.720215       1 deployment_controller.go:497] "Error syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-tfrcc\": the object has been modified; please apply your changes to the latest version and try again"
I0905 00:07:28.720249       1 deployment_controller.go:583] "Started syncing deployment" deployment="azurefile-1563/azurefile-volume-tester-tfrcc" startTime="2022-09-05 00:07:28.720232546 +0000 UTC m=+572.442518593"
I0905 00:07:28.727130       1 replica_set.go:457] Pod azurefile-volume-tester-tfrcc-77484f4c49-hx5vn updated, objectMeta {Name:azurefile-volume-tester-tfrcc-77484f4c49-hx5vn GenerateName:azurefile-volume-tester-tfrcc-77484f4c49- Namespace:azurefile-1563 SelfLink: UID:bc48dee0-8a81-47c5-90b0-5918cc239eb6 ResourceVersion:2197 Generation:0 CreationTimestamp:2022-09-05 00:07:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:77484f4c49] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-tfrcc-77484f4c49 UID:cacaa704-ddf7-4363-9226-6d636fb93558 Controller:0xc000af5f77 BlockOwnerDeletion:0xc000af5f78}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 00:07:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cacaa704-ddf7-4363-9226-6d636fb93558\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:}]} -> {Name:azurefile-volume-tester-tfrcc-77484f4c49-hx5vn GenerateName:azurefile-volume-tester-tfrcc-77484f4c49- Namespace:azurefile-1563 SelfLink: UID:bc48dee0-8a81-47c5-90b0-5918cc239eb6 ResourceVersion:2200 Generation:0 CreationTimestamp:2022-09-05 00:07:28 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:azurefile-volume-tester-5199948958991797301 pod-template-hash:77484f4c49] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:azurefile-volume-tester-tfrcc-77484f4c49 UID:cacaa704-ddf7-4363-9226-6d636fb93558 Controller:0xc00144fd17 BlockOwnerDeletion:0xc00144fd18}] Finalizers:[] ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-05 00:07:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cacaa704-ddf7-4363-9226-6d636fb93558\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"volume-tester\"}":{".":{},"f:args":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/mnt/test-1\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"test-volume-1\"}":{".":{},"f:name":{},"f:persistentVolumeClaim":{".":{},"f:claimName":{}}}}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-05 00:07:28 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0905 00:07:28.727639       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49", timestamp:time.Time{wall:0xc0bd6d5028be4d0c, ext:572405846151, loc:(*time.Location)(0x6f10040)}}
I0905 00:07:28.727980       1 replica_set.go:667] Finished syncing ReplicaSet "azurefile-1563/azurefile-volume-tester-tfrcc-77484f4c49" (468.334µs)
I0905 00:07:28.728260       1 disruption.go:494] updatePod called on pod "azurefile-volume-tester-tfrcc-77484f4c49-hx5vn"
I0905 00:07:28.728496       1 disruption.go:570] No PodDisruptionBudgets found for pod azurefile-volume-tester-tfrcc-77484f4c49-hx5vn, PodDisruptionBudget controller will avoid syncing.
... skipping 1205 lines ...
I0905 00:10:34.901299       1 namespaced_resources_deleter.go:502] namespace controller - deleteAllContent - namespace: azurefile-1393
2022/09/05 00:10:35 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 362.216 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 41 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-4hbxl, container manager
STEP: Dumping workload cluster default/capz-auh8ae logs
Sep  5 00:12:27.915: INFO: Collecting logs for Linux node capz-auh8ae-control-plane-ffdm6 in cluster capz-auh8ae in namespace default

Sep  5 00:13:27.917: INFO: Collecting boot logs for AzureMachine capz-auh8ae-control-plane-ffdm6

Failed to get logs for machine capz-auh8ae-control-plane-l29w2, cluster default/capz-auh8ae: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  5 00:13:29.390: INFO: Collecting logs for Linux node capz-auh8ae-mp-0000000 in cluster capz-auh8ae in namespace default

Sep  5 00:14:29.392: INFO: Collecting boot logs for VMSS instance 0 of scale set capz-auh8ae-mp-0

Sep  5 00:14:29.938: INFO: Collecting logs for Linux node capz-auh8ae-mp-0000001 in cluster capz-auh8ae in namespace default

Sep  5 00:15:29.940: INFO: Collecting boot logs for VMSS instance 1 of scale set capz-auh8ae-mp-0

Failed to get logs for machine pool capz-auh8ae-mp-0, cluster default/capz-auh8ae: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-auh8ae kube-system pod logs
STEP: Fetching kube-system pod logs took 1.063998131s
STEP: Dumping workload cluster default/capz-auh8ae Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-755ff8d7b5-dv7c8, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/coredns-84994b8c4-9qxs4
STEP: Creating log watcher for controller kube-system/csi-azurefile-node-bzgpv, container liveness-probe
... skipping 43 lines ...
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-auh8ae-control-plane-ffdm6, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-proxy-xjmq5, container kube-proxy
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-auh8ae-control-plane-ffdm6
STEP: Collecting events for Pod kube-system/calico-node-wjtdj
STEP: Creating log watcher for controller kube-system/coredns-84994b8c4-9qxs4, container coredns
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-7847f46f86-rrkpj, container azurefile
STEP: failed to find events of Pod "kube-controller-manager-capz-auh8ae-control-plane-ffdm6"
STEP: Fetching activity logs took 2.275026426s
================ REDACTING LOGS ================
All sensitive variables are redacted
cluster.cluster.x-k8s.io "capz-auh8ae" deleted
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/kind-v0.14.0 delete cluster --name=capz || true
Deleting cluster "capz" ...
... skipping 12 lines ...