Result: success
Tests: 0 failed / 6 succeeded
Started: 2022-09-08 05:41
Elapsed: 36m5s
Revision
Uploader: crier

No Test Failures!


Passed tests: 6

Skipped tests: 28

Error lines from build-log.txt

... skipping 624 lines ...
certificate.cert-manager.io "selfsigned-cert" deleted
# Create secret for AzureClusterIdentity
./hack/create-identity-secret.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
make[2]: Leaving directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
Error from server (NotFound): secrets "cluster-identity-secret" not found
secret/cluster-identity-secret created
secret/cluster-identity-secret labeled
# Create customized cloud provider configs
./hack/create-custom-cloud-provider-config.sh
make[2]: Entering directory '/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure'
make[2]: Nothing to be done for 'kubectl'.
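The `create-identity-secret.sh` step above follows a delete-then-recreate pattern: the initial `NotFound` error is expected (the script removes any stale copy before creating and labeling a fresh secret). A minimal stdlib sketch of that flow, with a map standing in for the cluster's secret store (names and the label key are hypothetical; the real script drives `kubectl`):

```go
package main

import "fmt"

// store stands in for the cluster's secret store.
type store map[string]map[string]string

// ensureSecret mimics the flow seen in the log: report NotFound if no
// stale copy exists, delete whatever is there, create fresh, then label.
func ensureSecret(s store, name, key, value string) {
	if _, ok := s[name]; !ok {
		fmt.Printf("Error from server (NotFound): secrets %q not found\n", name)
	}
	delete(s, name) // idempotent: safe whether or not the secret existed
	s[name] = map[string]string{key: value}
	fmt.Printf("secret/%s created\n", name)
	// Hypothetical label key; the actual label applied by the script is
	// not shown in the log.
	s[name]["label/example"] = "true"
	fmt.Printf("secret/%s labeled\n", name)
}

func main() {
	s := store{}
	ensureSecret(s, "cluster-identity-secret", "clientSecret", "redacted")
}
```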
... skipping 271 lines ...
Sep  8 06:00:44.802: INFO: PersistentVolumeClaim pvc-d77g5 found but phase is Pending instead of Bound.
Sep  8 06:00:46.906: INFO: PersistentVolumeClaim pvc-d77g5 found and phase=Bound (23.253288426s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  8 06:00:47.219: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-r8h8l" in namespace "azurefile-2540" to be "Succeeded or Failed"
Sep  8 06:00:47.321: INFO: Pod "azurefile-volume-tester-r8h8l": Phase="Pending", Reason="", readiness=false. Elapsed: 102.32353ms
Sep  8 06:00:49.432: INFO: Pod "azurefile-volume-tester-r8h8l": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212942284s
Sep  8 06:00:51.542: INFO: Pod "azurefile-volume-tester-r8h8l": Phase="Running", Reason="", readiness=false. Elapsed: 4.323795504s
Sep  8 06:00:53.653: INFO: Pod "azurefile-volume-tester-r8h8l": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.434740882s
STEP: Saw pod success
Sep  8 06:00:53.653: INFO: Pod "azurefile-volume-tester-r8h8l" satisfied condition "Succeeded or Failed"
Sep  8 06:00:53.653: INFO: deleting Pod "azurefile-2540"/"azurefile-volume-tester-r8h8l"
Sep  8 06:00:53.770: INFO: Pod azurefile-volume-tester-r8h8l has the following logs: hello world

STEP: Deleting pod azurefile-volume-tester-r8h8l in namespace azurefile-2540
Sep  8 06:00:53.886: INFO: deleting PVC "azurefile-2540"/"pvc-d77g5"
Sep  8 06:00:53.887: INFO: Deleting PersistentVolumeClaim "pvc-d77g5"
... skipping 157 lines ...
Sep  8 06:02:52.101: INFO: PersistentVolumeClaim pvc-trkd4 found but phase is Pending instead of Bound.
Sep  8 06:02:54.210: INFO: PersistentVolumeClaim pvc-trkd4 found and phase=Bound (25.364220526s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with an error
Sep  8 06:02:54.522: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-fflns" in namespace "azurefile-2790" to be "Error status code"
Sep  8 06:02:54.625: INFO: Pod "azurefile-volume-tester-fflns": Phase="Pending", Reason="", readiness=false. Elapsed: 102.563843ms
Sep  8 06:02:56.734: INFO: Pod "azurefile-volume-tester-fflns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211460292s
Sep  8 06:02:58.842: INFO: Pod "azurefile-volume-tester-fflns": Phase="Failed", Reason="", readiness=false. Elapsed: 4.319994237s
STEP: Saw pod failure
Sep  8 06:02:58.842: INFO: Pod "azurefile-volume-tester-fflns" satisfied condition "Error status code"
STEP: checking that pod logs contain expected message
Sep  8 06:02:58.955: INFO: deleting Pod "azurefile-2790"/"azurefile-volume-tester-fflns"
Sep  8 06:02:59.061: INFO: Pod azurefile-volume-tester-fflns has the following logs: touch: /mnt/test-1/data: Read-only file system

STEP: Deleting pod azurefile-volume-tester-fflns in namespace azurefile-2790
Sep  8 06:02:59.175: INFO: deleting PVC "azurefile-2790"/"pvc-trkd4"
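The test above is a negative case: the pod writes to a read-only azurefile mount, so it must reach phase `Failed` and its logs must contain the read-only-filesystem message. A sketch of that two-part assertion under assumed types (`podResult` and its fields are illustrative, not the e2e framework's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// podResult is a stand-in for the observed pod state.
type podResult struct {
	Phase string
	Logs  string
}

// verifyExpectedFailure checks both conditions the suite asserts: the
// pod failed, and its logs carry the expected error message.
func verifyExpectedFailure(p podResult, want string) error {
	if p.Phase != "Failed" {
		return fmt.Errorf("expected phase Failed, got %q", p.Phase)
	}
	if !strings.Contains(p.Logs, want) {
		return fmt.Errorf("logs %q missing expected message %q", p.Logs, want)
	}
	return nil
}

func main() {
	p := podResult{
		Phase: "Failed",
		Logs:  "touch: /mnt/test-1/data: Read-only file system",
	}
	fmt.Println(verifyExpectedFailure(p, "Read-only file system"))
}
```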
... skipping 180 lines ...
Sep  8 06:04:59.264: INFO: PersistentVolumeClaim pvc-d74xp found but phase is Pending instead of Bound.
Sep  8 06:05:01.369: INFO: PersistentVolumeClaim pvc-d74xp found and phase=Bound (2.206665927s)
STEP: checking the PVC
STEP: validating provisioned PV
STEP: checking the PV
STEP: deploying the pod
STEP: checking that the pods command exits with no error
Sep  8 06:05:01.683: INFO: Waiting up to 15m0s for pod "azurefile-volume-tester-f6drt" in namespace "azurefile-4538" to be "Succeeded or Failed"
Sep  8 06:05:01.786: INFO: Pod "azurefile-volume-tester-f6drt": Phase="Pending", Reason="", readiness=false. Elapsed: 102.488039ms
Sep  8 06:05:03.894: INFO: Pod "azurefile-volume-tester-f6drt": Phase="Running", Reason="", readiness=false. Elapsed: 2.21076216s
Sep  8 06:05:06.003: INFO: Pod "azurefile-volume-tester-f6drt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.319507516s
STEP: Saw pod success
Sep  8 06:05:06.003: INFO: Pod "azurefile-volume-tester-f6drt" satisfied condition "Succeeded or Failed"
STEP: resizing the pvc
STEP: sleep 30s waiting for resize complete
STEP: checking the resizing result
STEP: checking the resizing PV result
STEP: checking the resizing azurefile result
Sep  8 06:05:37.130: INFO: deleting Pod "azurefile-4538"/"azurefile-volume-tester-f6drt"
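The resize steps above (resize the PVC, wait 30s, then check the PVC, PV, and azurefile results) boil down to comparing the observed capacity against the requested one. A stdlib sketch of that comparison for the `Gi` quantities these tests use (`parseGi` is a hypothetical stand-in for the apimachinery `resource.MustParse`):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseGi converts a "<n>Gi" capacity string to bytes. It covers only
// the Gi suffix seen in these tests, unlike the real resource parser.
func parseGi(q string) (int64, error) {
	n, err := strconv.ParseInt(strings.TrimSuffix(q, "Gi"), 10, 64)
	if err != nil {
		return 0, fmt.Errorf("unsupported quantity %q: %w", q, err)
	}
	return n << 30, nil
}

// resized reports whether the observed capacity meets or exceeds the
// requested one — the check performed after the post-resize wait.
func resized(requested, observed string) (bool, error) {
	want, err := parseGi(requested)
	if err != nil {
		return false, err
	}
	got, err := parseGi(observed)
	if err != nil {
		return false, err
	}
	return got >= want, nil
}

func main() {
	ok, err := resized("10Gi", "10Gi")
	fmt.Println(ok, err)
}
```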
... skipping 867 lines ...
I0908 05:53:50.623427       1 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" certDetail="\"localhost@1662616429\" [serving] validServingFor=[127.0.0.1,127.0.0.1,localhost] issuer=\"localhost-ca@1662616429\" (2022-09-08 04:53:49 +0000 UTC to 2023-09-08 04:53:49 +0000 UTC (now=2022-09-08 05:53:50.623405461 +0000 UTC))"
I0908 05:53:50.623586       1 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopback" certDetail="\"apiserver-loopback-client@1662616430\" [serving] validServingFor=[apiserver-loopback-client] issuer=\"apiserver-loopback-client-ca@1662616430\" (2022-09-08 04:53:49 +0000 UTC to 2023-09-08 04:53:49 +0000 UTC (now=2022-09-08 05:53:50.623563461 +0000 UTC))"
I0908 05:53:50.623620       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
I0908 05:53:50.623848       1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0908 05:53:50.624448       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0908 05:53:50.624565       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
E0908 05:53:52.793111       1 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: leases.coordination.k8s.io "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-system"
I0908 05:53:52.793144       1 leaderelection.go:253] failed to acquire lease kube-system/kube-controller-manager
I0908 05:53:57.134143       1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager
I0908 05:53:57.134340       1 event.go:294] "Event occurred" object="kube-system/kube-controller-manager" fieldPath="" kind="Lease" apiVersion="coordination.k8s.io/v1" type="Normal" reason="LeaderElection" message="capz-ejsbt5-control-plane-nnz8j_ae680ec9-37d3-4692-84e6-0cd92cf23aac became leader"
I0908 05:53:57.242150       1 request.go:614] Waited for 95.809745ms due to client-side throttling, not priority and fairness, request: GET:https://10.0.0.4:6443/apis/flowcontrol.apiserver.k8s.io/v1beta2
W0908 05:53:57.244672       1 plugins.go:132] WARNING: azure built-in cloud provider is now deprecated. The Azure provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes-sigs/cloud-provider-azure
I0908 05:53:57.249823       1 azure_auth.go:232] Using AzurePublicCloud environment
I0908 05:53:57.250007       1 azure_auth.go:117] azure: using client_id+client_secret to retrieve access token
... skipping 30 lines ...
I0908 05:53:57.251671       1 reflector.go:255] Listing and watching *v1.Secret from vendor/k8s.io/client-go/informers/factory.go:134
I0908 05:53:57.251672       1 reflector.go:219] Starting reflector *v1.ServiceAccount (16h1m43.70833774s) from vendor/k8s.io/client-go/informers/factory.go:134
I0908 05:53:57.252120       1 reflector.go:255] Listing and watching *v1.ServiceAccount from vendor/k8s.io/client-go/informers/factory.go:134
I0908 05:53:57.251699       1 shared_informer.go:255] Waiting for caches to sync for tokens
I0908 05:53:57.251567       1 reflector.go:219] Starting reflector *v1.Node (16h1m43.70833774s) from vendor/k8s.io/client-go/informers/factory.go:134
I0908 05:53:57.253550       1 reflector.go:255] Listing and watching *v1.Node from vendor/k8s.io/client-go/informers/factory.go:134
W0908 05:53:57.270924       1 azure_config.go:53] Failed to get cloud-config from secret: failed to get secret azure-cloud-provider: secrets "azure-cloud-provider" is forbidden: User "system:serviceaccount:kube-system:azure-cloud-provider" cannot get resource "secrets" in API group "" in the namespace "kube-system", skip initializing from secret
I0908 05:53:57.270954       1 controllermanager.go:564] Starting "horizontalpodautoscaling"
I0908 05:53:57.281117       1 controllermanager.go:593] Started "horizontalpodautoscaling"
I0908 05:53:57.281138       1 controllermanager.go:564] Starting "csrsigning"
I0908 05:53:57.281326       1 horizontal.go:168] Starting HPA controller
I0908 05:53:57.281343       1 shared_informer.go:255] Waiting for caches to sync for HPA
I0908 05:53:57.286281       1 dynamic_serving_content.go:113] "Loaded a new cert/key pair" name="csr-controller::/etc/kubernetes/pki/ca.crt::/etc/kubernetes/pki/ca.key"
... skipping 70 lines ...
I0908 05:53:58.156144       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/aws-ebs"
I0908 05:53:58.156158       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
I0908 05:53:58.156176       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
I0908 05:53:58.156193       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
I0908 05:53:58.156216       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
I0908 05:53:58.156233       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0908 05:53:58.156272       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0908 05:53:58.156287       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0908 05:53:58.156368       1 controllermanager.go:593] Started "persistentvolume-binder"
I0908 05:53:58.156387       1 controllermanager.go:564] Starting "ephemeral-volume"
I0908 05:53:58.156437       1 pv_controller_base.go:311] Starting persistent volume controller
I0908 05:53:58.156452       1 shared_informer.go:255] Waiting for caches to sync for persistent volume
I0908 05:53:58.305157       1 controllermanager.go:593] Started "ephemeral-volume"
... skipping 82 lines ...
I0908 05:54:00.306470       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
I0908 05:54:00.307373       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
I0908 05:54:00.307416       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
I0908 05:54:00.307431       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
I0908 05:54:00.307448       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/fc"
I0908 05:54:00.307556       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
I0908 05:54:00.307721       1 csi_plugin.go:262] Cast from VolumeHost to KubeletVolumeHost failed. Skipping CSINode initialization, not running on kubelet
I0908 05:54:00.307739       1 plugins.go:634] "Loaded volume plugin" pluginName="kubernetes.io/csi"
I0908 05:54:00.307996       1 controllermanager.go:593] Started "attachdetach"
I0908 05:54:00.308017       1 controllermanager.go:564] Starting "clusterrole-aggregation"
I0908 05:54:00.308073       1 attach_detach_controller.go:328] Starting attach detach controller
I0908 05:54:00.308084       1 shared_informer.go:255] Waiting for caches to sync for attach detach
I0908 05:54:00.455694       1 controllermanager.go:593] Started "clusterrole-aggregation"
... skipping 354 lines ...
I0908 05:54:02.337466       1 shared_informer.go:285] caches populated
I0908 05:54:02.337499       1 shared_informer.go:262] Caches are synced for garbage collector
I0908 05:54:02.337513       1 garbagecollector.go:258] synced garbage collector
I0908 05:54:02.394503       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="92.602µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:44548" resp=200
I0908 05:54:07.531783       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-ejsbt5-control-plane-nnz8j}
I0908 05:54:07.531897       1 taint_manager.go:441] "Updating known taints on node" node="capz-ejsbt5-control-plane-nnz8j" taints=[]
I0908 05:54:07.533597       1 topologycache.go:183] Ignoring node capz-ejsbt5-control-plane-nnz8j because it is not ready: [{MemoryPressure False 2022-09-08 05:53:39 +0000 UTC 2022-09-08 05:53:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-08 05:53:39 +0000 UTC 2022-09-08 05:53:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-08 05:53:39 +0000 UTC 2022-09-08 05:53:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-08 05:53:39 +0000 UTC 2022-09-08 05:53:39 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized]}]
I0908 05:54:07.535623       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0908 05:54:07.534470       1 controller.go:697] Ignoring node capz-ejsbt5-control-plane-nnz8j with Ready condition status False
I0908 05:54:07.534493       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-control-plane-nnz8j"
W0908 05:54:07.536105       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ejsbt5-control-plane-nnz8j" does not exist
I0908 05:54:07.536054       1 controller.go:272] Triggering nodeSync
I0908 05:54:07.536062       1 controller.go:291] nodeSync has been triggered
I0908 05:54:07.537198       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0908 05:54:07.537271       1 controller.go:808] Finished updateLoadBalancerHosts
I0908 05:54:07.537320       1 controller.go:735] It took 0.000160099 seconds to finish nodeSyncInternal
I0908 05:54:07.576955       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-control-plane-nnz8j"
... skipping 92 lines ...
I0908 05:54:08.564679       1 endpointslicemirroring_controller.go:309] kube-system/kube-dns Service now has selector, cleaning up any mirrored EndpointSlices
I0908 05:54:08.564702       1 endpointslicemirroring_controller.go:271] Finished syncing EndpointSlices for "kube-system/kube-dns" Endpoints. (81.799µs)
I0908 05:54:08.565514       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-08 05:54:08.548859181 +0000 UTC m=+19.607814802 - now: 2022-09-08 05:54:08.565503842 +0000 UTC m=+19.624459363]
I0908 05:54:08.565718       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/coredns"
I0908 05:54:08.565884       1 endpoints_controller.go:365] Finished syncing service "kube-system/kube-dns" endpoints. (32.012433ms)
I0908 05:54:08.573542       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="62.032382ms"
I0908 05:54:08.573589       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/coredns" err="Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:54:08.573641       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-08 05:54:08.573625574 +0000 UTC m=+19.632581095"
I0908 05:54:08.574692       1 daemon_controller.go:226] Adding daemon set kube-proxy
I0908 05:54:08.575343       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-08 05:54:08 +0000 UTC - now: 2022-09-08 05:54:08.57533526 +0000 UTC m=+19.634290881]
I0908 05:54:08.580195       1 daemon_controller.go:394] ControllerRevision kube-proxy-5b8d9dbc89 added.
I0908 05:54:08.585837       1 controller_utils.go:206] Controller kube-system/kube-proxy either never recorded expectations, or the ttl expired.
I0908 05:54:08.585904       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be7ec022ec1e34, ext:19644855193, loc:(*time.Location)(0x724bda0)}}
... skipping 136 lines ...
I0908 05:54:08.948146       1 disruption.go:430] No matching pdb for pod "metrics-server-7d674f87b8-g9r6p"
I0908 05:54:08.948168       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/metrics-server-7d674f87b8-g9r6p" podUID=843856c0-7800-470c-b05e-65511a811140
I0908 05:54:08.948415       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-7d674f87b8" (41.764451ms)
I0908 05:54:08.948445       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-7d674f87b8", timestamp:time.Time{wall:0xc0be7ec0360b040a, ext:19965647115, loc:(*time.Location)(0x724bda0)}}
I0908 05:54:08.948520       1 replica_set_utils.go:59] Updating status for : kube-system/metrics-server-7d674f87b8, replicas 0->1 (need 1), fullyLabeledReplicas 0->1, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0908 05:54:08.957858       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/metrics-server" duration="59.227305ms"
I0908 05:54:08.957888       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/metrics-server" err="Operation cannot be fulfilled on deployments.apps \"metrics-server\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:54:08.957919       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/metrics-server" startTime="2022-09-08 05:54:08.957905967 +0000 UTC m=+20.016861488"
I0908 05:54:08.958422       1 deployment_util.go:774] Deployment "metrics-server" timed out (false) [last progress check: 2022-09-08 05:54:08 +0000 UTC - now: 2022-09-08 05:54:08.958412763 +0000 UTC m=+20.017368284]
I0908 05:54:08.959537       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/metrics-server-7d674f87b8"
I0908 05:54:08.963846       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-7d674f87b8" (15.404471ms)
I0908 05:54:08.963878       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/metrics-server-7d674f87b8", timestamp:time.Time{wall:0xc0be7ec0360b040a, ext:19965647115, loc:(*time.Location)(0x724bda0)}}
I0908 05:54:08.963935       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/metrics-server-7d674f87b8" (62.6µs)
... skipping 58 lines ...
I0908 05:54:11.402812       1 disruption.go:427] updatePod called on pod "calico-kube-controllers-7867496574-kq9tc"
I0908 05:54:11.402828       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-kube-controllers-7867496574-kq9tc, PodDisruptionBudget controller will avoid syncing.
I0908 05:54:11.402835       1 disruption.go:430] No matching pdb for pod "calico-kube-controllers-7867496574-kq9tc"
I0908 05:54:11.402856       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/calico-kube-controllers-7867496574-kq9tc" podUID=0ce068e7-7b9a-4501-87e9-53274a1e955d
I0908 05:54:11.402889       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/calico-kube-controllers-7867496574"
I0908 05:54:11.404030       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="52.310789ms"
I0908 05:54:11.404073       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/calico-kube-controllers" err="Operation cannot be fulfilled on deployments.apps \"calico-kube-controllers\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:54:11.404115       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-08 05:54:11.404102325 +0000 UTC m=+22.463057946"
I0908 05:54:11.404663       1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-08 05:54:11 +0000 UTC - now: 2022-09-08 05:54:11.404657033 +0000 UTC m=+22.463612554]
I0908 05:54:11.409917       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/calico-kube-controllers"
I0908 05:54:11.410057       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/calico-kube-controllers" duration="5.941189ms"
I0908 05:54:11.410083       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/calico-kube-controllers" startTime="2022-09-08 05:54:11.410070415 +0000 UTC m=+22.469025936"
I0908 05:54:11.410563       1 deployment_util.go:774] Deployment "calico-kube-controllers" timed out (false) [last progress check: 2022-09-08 05:54:11 +0000 UTC - now: 2022-09-08 05:54:11.410556422 +0000 UTC m=+22.469512043]
... skipping 285 lines ...
I0908 05:54:31.904705       1 tokencleaner.go:166] Finished syncing secret "kube-system/bootstrap-token-iunaxo" (6.8µs)
I0908 05:54:31.910868       1 reflector.go:436] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata closed with: too old resource version: 549 (550)
I0908 05:54:31.965037       1 resource_quota_monitor.go:298] quota monitor not synced: crd.projectcalico.org/v1, Resource=networkpolicies
I0908 05:54:32.064917       1 shared_informer.go:285] caches populated
I0908 05:54:32.064945       1 shared_informer.go:262] Caches are synced for resource quota
I0908 05:54:32.064956       1 resource_quota_controller.go:458] synced quota controller
W0908 05:54:32.378646       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0908 05:54:32.378809       1 garbagecollector.go:215] syncing garbage collector with updated resources from discovery (attempt 1): added: [crd.projectcalico.org/v1, Resource=bgpconfigurations crd.projectcalico.org/v1, Resource=bgppeers crd.projectcalico.org/v1, Resource=blockaffinities crd.projectcalico.org/v1, Resource=caliconodestatuses crd.projectcalico.org/v1, Resource=clusterinformations crd.projectcalico.org/v1, Resource=felixconfigurations crd.projectcalico.org/v1, Resource=globalnetworkpolicies crd.projectcalico.org/v1, Resource=globalnetworksets crd.projectcalico.org/v1, Resource=hostendpoints crd.projectcalico.org/v1, Resource=ipamblocks crd.projectcalico.org/v1, Resource=ipamconfigs crd.projectcalico.org/v1, Resource=ipamhandles crd.projectcalico.org/v1, Resource=ippools crd.projectcalico.org/v1, Resource=ipreservations crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations crd.projectcalico.org/v1, Resource=networkpolicies crd.projectcalico.org/v1, Resource=networksets], removed: []
I0908 05:54:32.378828       1 garbagecollector.go:221] reset restmapper
E0908 05:54:32.384714       1 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0908 05:54:32.404045       1 memcache.go:104] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0908 05:54:32.406300       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=kubecontrollersconfigurations", kind "crd.projectcalico.org/v1, Kind=KubeControllersConfiguration"
I0908 05:54:32.406371       1 graph_builder.go:174] using a shared informer for resource "crd.projectcalico.org/v1, Resource=ipamblocks", kind "crd.projectcalico.org/v1, Kind=IPAMBlock"
... skipping 208 lines ...
I0908 05:54:41.573816       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/coredns" startTime="2022-09-08 05:54:41.573800317 +0000 UTC m=+52.632755838"
I0908 05:54:41.580157       1 deployment_util.go:774] Deployment "coredns" timed out (false) [last progress check: 2022-09-08 05:54:41 +0000 UTC - now: 2022-09-08 05:54:41.580148862 +0000 UTC m=+52.639104383]
I0908 05:54:41.580202       1 progress.go:195] Queueing up deployment "coredns" for a progress check after 599s
I0908 05:54:41.580237       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/coredns" duration="6.425946ms"
I0908 05:54:41.810051       1 gc_controller.go:214] GC'ing orphaned
I0908 05:54:41.810078       1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
I0908 05:54:41.816244       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-ejsbt5-control-plane-nnz8j transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-08 05:54:19 +0000 UTC,LastTransitionTime:2022-09-08 05:53:39 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-08 05:54:39 +0000 UTC,LastTransitionTime:2022-09-08 05:54:39 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0908 05:54:41.816362       1 node_lifecycle_controller.go:1048] Node capz-ejsbt5-control-plane-nnz8j ReadyCondition updated. Updating timestamp.
I0908 05:54:41.816390       1 node_lifecycle_controller.go:894] Node capz-ejsbt5-control-plane-nnz8j is healthy again, removing all taints
I0908 05:54:41.816410       1 node_lifecycle_controller.go:1192] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0908 05:54:42.469139       1 replica_set.go:457] Pod coredns-6d4b75cb6d-dxb7n updated, objectMeta {Name:coredns-6d4b75cb6d-dxb7n GenerateName:coredns-6d4b75cb6d- Namespace:kube-system SelfLink: UID:a704f6ed-a411-4247-a0e2-1807f5d2fd6b ResourceVersion:603 Generation:0 CreationTimestamp:2022-09-08 05:54:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:6d4b75cb6d] Annotations:map[cni.projectcalico.org/containerID:554378f3944eb005c549a03726df273e50ce386f1722369880641e83a3985717 cni.projectcalico.org/podIP:192.168.179.3/32 cni.projectcalico.org/podIPs:192.168.179.3/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-6d4b75cb6d UID:f7d343e5-d37e-4862-adad-59270f79aa07 Controller:0xc000dda757 BlockOwnerDeletion:0xc000dda758}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:54:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7d343e5-d37e-4862-adad-59270f79aa07\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{}
,"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-08 05:54:08 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-08 05:54:39 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-08 05:54:40 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status}]} -> {Name:coredns-6d4b75cb6d-dxb7n GenerateName:coredns-6d4b75cb6d- Namespace:kube-system SelfLink: UID:a704f6ed-a411-4247-a0e2-1807f5d2fd6b ResourceVersion:623 Generation:0 CreationTimestamp:2022-09-08 05:54:08 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns pod-template-hash:6d4b75cb6d] Annotations:map[cni.projectcalico.org/containerID:554378f3944eb005c549a03726df273e50ce386f1722369880641e83a3985717 cni.projectcalico.org/podIP:192.168.179.3/32 cni.projectcalico.org/podIPs:192.168.179.3/32] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:coredns-6d4b75cb6d UID:f7d343e5-d37e-4862-adad-59270f79aa07 Controller:0xc001d65eef BlockOwnerDeletion:0xc001d65f30}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:54:08 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f7d343e5-d37e-4862-adad-59270f79aa07\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":53,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":53,\"protocol\":\"UDP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":9153,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:name":{},"f:protocol":{}}},"f:readinessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:securityContext":{".":{},"f:allowPrivilegeEscalation":{},"f:capabilities":{".":{},"f:add":{},"f:drop":{}},"f:readOnlyRootFilesystem":{}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/etc/coredns\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"config-volume\"}":{".":{},"f:configMap":{".":{},"f:defaultMode":{},"f:items":{},"f:name":{}},"f:name":{}}}}} Subresource:} {Manager:kube-scheduler Operation:Update APIVersion:v1 Time:2022-09-08 05:54:08 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"PodScheduled\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} Subresource:status} {Manager:Go-http-client Operation:Update APIVersion:v1 Time:2022-09-08 05:54:40 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} Subresource:status} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-08 05:54:42 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.179.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} Subresource:status}]}.
I0908 05:54:42.469328       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/coredns-6d4b75cb6d", timestamp:time.Time{wall:0xc0be7ec020c328ca, ext:19608616395, loc:(*time.Location)(0x724bda0)}}
I0908 05:54:42.469429       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/coredns-6d4b75cb6d" (106.501µs)
... skipping 97 lines ...
I0908 05:55:01.675543       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0908 05:55:01.810629       1 gc_controller.go:214] GC'ing orphaned
I0908 05:55:01.810658       1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
I0908 05:55:01.859440       1 pv_controller_base.go:605] resyncing PV controller
E0908 05:55:02.083875       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0908 05:55:02.083938       1 resource_quota_controller.go:428] no resource updates from discovery, skipping resource quota sync
W0908 05:55:03.033190       1 garbagecollector.go:747] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0908 05:55:09.811110       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="89.502µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:43528" resp=200
I0908 05:55:10.573533       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-control-plane-nnz8j"
I0908 05:55:11.819363       1 node_lifecycle_controller.go:1048] Node capz-ejsbt5-control-plane-nnz8j ReadyCondition updated. Updating timestamp.
I0908 05:55:16.620605       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0908 05:55:16.859912       1 pv_controller_base.go:605] resyncing PV controller
I0908 05:55:19.811472       1 httplog.go:131] "HTTP" verb="GET" URI="/healthz" latency="78.301µs" userAgent="kube-probe/1.24+" audit-ID="" srcIP="127.0.0.1:48934" resp=200
... skipping 60 lines ...
I0908 05:55:34.237780       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be7ec207eb18b6, ext:27191803419, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:34.238159       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be7ed58e319de1, ext:105297088326, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:34.238325       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-ejsbt5-md-0-2rbjw], creating 1
I0908 05:55:34.240027       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-ejsbt5-md-0-2rbjw}
I0908 05:55:34.242012       1 taint_manager.go:441] "Updating known taints on node" node="capz-ejsbt5-md-0-2rbjw" taints=[]
I0908 05:55:34.242210       1 topologycache.go:179] Ignoring node capz-ejsbt5-control-plane-nnz8j because it has an excluded label
I0908 05:55:34.242316       1 topologycache.go:183] Ignoring node capz-ejsbt5-md-0-2rbjw because it is not ready: [{MemoryPressure False 2022-09-08 05:55:34 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-08 05:55:34 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-08 05:55:34 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-08 05:55:34 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletNotReady [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, CSINode is not yet initialized, missing node capacity for resources: ephemeral-storage]}]
I0908 05:55:34.242468       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0908 05:55:34.242586       1 controller.go:697] Ignoring node capz-ejsbt5-md-0-2rbjw with Ready condition status False
I0908 05:55:34.242626       1 controller.go:272] Triggering nodeSync
I0908 05:55:34.242699       1 controller.go:291] nodeSync has been triggered
I0908 05:55:34.242809       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0908 05:55:34.242911       1 controller.go:808] Finished updateLoadBalancerHosts
I0908 05:55:34.243012       1 controller.go:735] It took 0.000204201 seconds to finish nodeSyncInternal
I0908 05:55:34.243118       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-2rbjw"
W0908 05:55:34.243236       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ejsbt5-md-0-2rbjw" does not exist
I0908 05:55:34.249736       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be7ec8e97963cf, ext:54754776884, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:34.249975       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be7ed58ee63772, ext:105308924119, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:34.250002       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-ejsbt5-md-0-2rbjw], creating 1
I0908 05:55:34.271414       1 ttl_controller.go:276] "Changed ttl annotation" node="capz-ejsbt5-md-0-2rbjw" new_ttl="0s"
I0908 05:55:34.273114       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-2rbjw"
I0908 05:55:34.278776       1 daemon_controller.go:513] Pod kube-proxy-lkkdx added.
... skipping 270 lines ...
I0908 05:55:55.551101       1 disruption.go:490] No PodDisruptionBudgets found for pod calico-node-clwlz, PodDisruptionBudget controller will avoid syncing.
I0908 05:55:55.551278       1 disruption.go:430] No matching pdb for pod "calico-node-clwlz"
I0908 05:55:55.551059       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/calico-node" (2.770734ms)
I0908 05:55:58.193294       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-ejsbt5-md-0-rfbqm}
I0908 05:55:58.193323       1 taint_manager.go:441] "Updating known taints on node" node="capz-ejsbt5-md-0-rfbqm" taints=[]
I0908 05:55:58.193588       1 topologycache.go:179] Ignoring node capz-ejsbt5-control-plane-nnz8j because it has an excluded label
I0908 05:55:58.194294       1 topologycache.go:183] Ignoring node capz-ejsbt5-md-0-2rbjw because it is not ready: [{MemoryPressure False 2022-09-08 05:55:44 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-08 05:55:44 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-08 05:55:44 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-08 05:55:44 +0000 UTC 2022-09-08 05:55:34 +0000 UTC KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}]
I0908 05:55:58.194331       1 topologycache.go:183] Ignoring node capz-ejsbt5-md-0-rfbqm because it is not ready: [{MemoryPressure False 2022-09-08 05:55:57 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-08 05:55:57 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-08 05:55:57 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-08 05:55:57 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-ejsbt5-md-0-rfbqm" not found]}]
I0908 05:55:58.194355       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, true)
I0908 05:55:58.194146       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be7ed79f55db94, ext:113584676089, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:58.194436       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/kube-proxy", timestamp:time.Time{wall:0xc0be7edb8b96c8f9, ext:129253386846, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:58.194453       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set kube-proxy: [capz-ejsbt5-md-0-rfbqm], creating 1
I0908 05:55:58.194261       1 controller.go:697] Ignoring node capz-ejsbt5-md-0-2rbjw with Ready condition status False
I0908 05:55:58.194795       1 controller.go:697] Ignoring node capz-ejsbt5-md-0-rfbqm with Ready condition status False
I0908 05:55:58.194808       1 controller.go:272] Triggering nodeSync
I0908 05:55:58.194816       1 controller.go:291] nodeSync has been triggered
I0908 05:55:58.194823       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0908 05:55:58.194833       1 controller.go:808] Finished updateLoadBalancerHosts
I0908 05:55:58.194840       1 controller.go:735] It took 1.75e-05 seconds to finish nodeSyncInternal
I0908 05:55:58.194277       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-rfbqm"
W0908 05:55:58.194859       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="capz-ejsbt5-md-0-rfbqm" does not exist
I0908 05:55:58.196285       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be7edae0cefda1, ext:126609391778, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:58.196383       1 controller_utils.go:223] Setting expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/calico-node", timestamp:time.Time{wall:0xc0be7edb8bb4845d, ext:129255335262, loc:(*time.Location)(0x724bda0)}}
I0908 05:55:58.196402       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [capz-ejsbt5-md-0-rfbqm], creating 1
I0908 05:55:58.213407       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-rfbqm"
I0908 05:55:58.217462       1 controller_utils.go:581] Controller kube-proxy created pod kube-proxy-9ljrj
I0908 05:55:58.235536       1 controller_utils.go:581] Controller calico-node created pod calico-node-sj2ht
... skipping 179 lines ...
I0908 05:56:03.585718       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/calico-node" (2.50333ms)
I0908 05:56:04.877737       1 controller_utils.go:205] "Added taint to node" taint=[] node="capz-ejsbt5-md-0-2rbjw"
I0908 05:56:04.878168       1 controller.go:697] Ignoring node capz-ejsbt5-md-0-rfbqm with Ready condition status False
I0908 05:56:04.878768       1 controller.go:265] Node changes detected, triggering a full node sync on all loadbalancer services
I0908 05:56:04.879052       1 controller.go:272] Triggering nodeSync
I0908 05:56:04.878209       1 topologycache.go:179] Ignoring node capz-ejsbt5-control-plane-nnz8j because it has an excluded label
I0908 05:56:04.879433       1 topologycache.go:183] Ignoring node capz-ejsbt5-md-0-rfbqm because it is not ready: [{MemoryPressure False 2022-09-08 05:55:58 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2022-09-08 05:55:58 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2022-09-08 05:55:58 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready False 2022-09-08 05:55:58 +0000 UTC 2022-09-08 05:55:57 +0000 UTC KubeletNotReady [container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "capz-ejsbt5-md-0-rfbqm" not found]}]
I0908 05:56:04.878224       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-2rbjw"
I0908 05:56:04.879293       1 controller.go:291] nodeSync has been triggered
I0908 05:56:04.879896       1 controller.go:757] Syncing backends for all LB services.
I0908 05:56:04.880016       1 controller.go:792] Running updateLoadBalancerHosts(len(services)==0, workers==1)
I0908 05:56:04.880172       1 controller.go:808] Finished updateLoadBalancerHosts
I0908 05:56:04.880293       1 controller.go:764] Successfully updated 0 out of 0 load balancers to direct traffic to the updated set of nodes
... skipping 2 lines ...
I0908 05:56:04.893377       1 attach_detach_controller.go:673] processVolumesInUse for node "capz-ejsbt5-md-0-2rbjw"
I0908 05:56:04.893746       1 controller_utils.go:217] "Made sure that node has no taint" node="capz-ejsbt5-md-0-2rbjw" taint=[&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,}]
I0908 05:56:04.910600       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ejsbt5/providers/Microsoft.Compute/virtualMachines/capz-ejsbt5-md-0-rfbqm), assuming it is managed by availability set: not a vmss instance
I0908 05:56:04.910704       1 azure_vmss.go:370] Can not extract scale set name from providerID (azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/capz-ejsbt5/providers/Microsoft.Compute/virtualMachines/capz-ejsbt5-md-0-rfbqm), assuming it is managed by availability set: not a vmss instance
I0908 05:56:04.910763       1 azure_instances.go:240] InstanceShutdownByProviderID gets power status "running" for node "capz-ejsbt5-md-0-rfbqm"
I0908 05:56:04.910780       1 azure_instances.go:251] InstanceShutdownByProviderID gets provisioning state "Updating" for node "capz-ejsbt5-md-0-rfbqm"
I0908 05:56:06.827561       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-ejsbt5-md-0-2rbjw transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-08 05:55:44 +0000 UTC,LastTransitionTime:2022-09-08 05:55:34 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-08 05:56:04 +0000 UTC,LastTransitionTime:2022-09-08 05:56:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0908 05:56:06.827652       1 node_lifecycle_controller.go:1048] Node capz-ejsbt5-md-0-2rbjw ReadyCondition updated. Updating timestamp.
I0908 05:56:06.841132       1 node_lifecycle_controller.go:894] Node capz-ejsbt5-md-0-2rbjw is healthy again, removing all taints
I0908 05:56:06.841941       1 node_lifecycle_controller.go:1215] Controller detected that zone uksouth::0 is now in state Normal.
I0908 05:56:06.841847       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-ejsbt5-md-0-2rbjw}
I0908 05:56:06.842204       1 taint_manager.go:441] "Updating known taints on node" node="capz-ejsbt5-md-0-2rbjw" taints=[]
I0908 05:56:06.842308       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-ejsbt5-md-0-2rbjw"
... skipping 163 lines ...
I0908 05:56:30.262951       1 daemon_controller.go:967] Nodes needing daemon pods for daemon set calico-node: [], creating 0
I0908 05:56:30.263059       1 daemon_controller.go:1029] Pods to delete for daemon set calico-node: [], deleting 0
I0908 05:56:30.263132       1 daemon_controller.go:1112] Updating daemon set status
I0908 05:56:30.263387       1 daemon_controller.go:1172] Finished syncing daemon set "kube-system/calico-node" (2.946238ms)
I0908 05:56:31.624294       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0908 05:56:31.678105       1 reflector.go:382] vendor/k8s.io/client-go/informers/factory.go:134: forcing resync
I0908 05:56:31.846010       1 node_lifecycle_controller.go:1040] ReadyCondition for Node capz-ejsbt5-md-0-rfbqm transitioned from &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:2022-09-08 05:56:18 +0000 UTC,LastTransitionTime:2022-09-08 05:55:57 +0000 UTC,Reason:KubeletNotReady,Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized,} to &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-09-08 05:56:29 +0000 UTC,LastTransitionTime:2022-09-08 05:56:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,}
I0908 05:56:31.846075       1 node_lifecycle_controller.go:1048] Node capz-ejsbt5-md-0-rfbqm ReadyCondition updated. Updating timestamp.
I0908 05:56:31.859290       1 node_lifecycle_controller.go:894] Node capz-ejsbt5-md-0-rfbqm is healthy again, removing all taints
I0908 05:56:31.861183       1 taint_manager.go:436] "Noticed node update" node={nodeName:capz-ejsbt5-md-0-rfbqm}
I0908 05:56:31.861208       1 taint_manager.go:441] "Updating known taints on node" node="capz-ejsbt5-md-0-rfbqm" taints=[]
I0908 05:56:31.861224       1 taint_manager.go:462] "All taints were removed from the node. Cancelling all evictions..." node="capz-ejsbt5-md-0-rfbqm"
I0908 05:56:31.861696       1 pv_controller_base.go:605] resyncing PV controller
... skipping 14 lines ...
I0908 05:56:37.634399       1 event.go:294] "Event occurred" object="kube-system/csi-azurefile-controller-78f78cfdd5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: csi-azurefile-controller-78f78cfdd5-2zg8g"
I0908 05:56:37.636643       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g"
I0908 05:56:37.636751       1 disruption.go:415] addPod called on pod "csi-azurefile-controller-78f78cfdd5-2zg8g"
I0908 05:56:37.636834       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-azurefile-controller-78f78cfdd5-2zg8g, PodDisruptionBudget controller will avoid syncing.
I0908 05:56:37.636920       1 disruption.go:418] No matching pdb for pod "csi-azurefile-controller-78f78cfdd5-2zg8g"
I0908 05:56:37.637010       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g" podUID=b8f6e917-c992-4277-8bd4-da79cb368f57
I0908 05:56:37.637102       1 replica_set.go:394] Pod csi-azurefile-controller-78f78cfdd5-2zg8g created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-78f78cfdd5-2zg8g", GenerateName:"csi-azurefile-controller-78f78cfdd5-", Namespace:"kube-system", SelfLink:"", UID:"b8f6e917-c992-4277-8bd4-da79cb368f57", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2022, time.September, 8, 5, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"78f78cfdd5"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-78f78cfdd5", UID:"6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26", Controller:(*bool)(0xc002a43557), BlockOwnerDeletion:(*bool)(0xc002a43558)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 8, 5, 56, 37, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0025ed488), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc0025ed4a0), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0025ed4b8), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-mb77z", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002243900), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc002243a20)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mb77z", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc0029bd640), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a43970), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003ade30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a439f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a43a10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002a43a18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a43a1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002c70e50), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0908 05:56:37.638177       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"kube-system/csi-azurefile-controller-78f78cfdd5", timestamp:time.Time{wall:0xc0be7ee564cbc12c, ext:168676288657, loc:(*time.Location)(0x724bda0)}}
I0908 05:56:37.649191       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-azurefile-controller" duration="44.498171ms"
I0908 05:56:37.649310       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-azurefile-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-azurefile-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:56:37.649402       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-azurefile-controller" startTime="2022-09-08 05:56:37.649384947 +0000 UTC m=+168.708340468"
I0908 05:56:37.651239       1 replica_set.go:394] Pod csi-azurefile-controller-78f78cfdd5-5s64m created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"csi-azurefile-controller-78f78cfdd5-5s64m", GenerateName:"csi-azurefile-controller-78f78cfdd5-", Namespace:"kube-system", SelfLink:"", UID:"958a6168-f650-4f7d-bafa-ef05d9c788bb", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2022, time.September, 8, 5, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"csi-azurefile-controller", "pod-template-hash":"78f78cfdd5"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"csi-azurefile-controller-78f78cfdd5", UID:"6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26", Controller:(*bool)(0xc00255df27), BlockOwnerDeletion:(*bool)(0xc00255df28)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 8, 5, 56, 37, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00274c5e8), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"socket-dir", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(0xc00274c630), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), 
Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"azure-cred", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00274c678), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), 
Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-mzxmx", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0027fc0a0), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"csi-provisioner", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner:v3.2.0", Command:[]string(nil), Args:[]string{"-v=2", "--csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system", "--timeout=300s", "--extra-create-metadata=true"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, 
Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-attacher", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-attacher:v3.5.0", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "-timeout=120s", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, 
Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-snapshotter", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-snapshotter:v5.0.1", Command:[]string(nil), Args:[]string{"-v=2", "-csi-address=$(ADDRESS)", "--leader-election", "--leader-election-namespace=kube-system"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", 
Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"csi-resizer", Image:"mcr.microsoft.com/oss/kubernetes-csi/csi-resizer:v1.5.0", Command:[]string(nil), Args:[]string{"-csi-address=$(ADDRESS)", "-v=2", "--leader-election", "--leader-election-namespace=kube-system", "-handle-volume-inuse-error=false", "-feature-gates=RecoverVolumeExpansionFailure=true", "-timeout=120s"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"ADDRESS", Value:"/csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:524288000, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"500Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, 
v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"liveness-probe", Image:"mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.7.0", Command:[]string(nil), Args:[]string{"--csi-address=/csi/csi.sock", "--probe-timeout=3s", "--health-port=29612", "--v=2"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:104857600, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"azurefile", Image:"mcr.microsoft.com/k8s/csi/azurefile-csi:latest", Command:[]string(nil), Args:[]string{"--v=5", "--endpoint=$(CSI_ENDPOINT)", "--metrics-address=0.0.0.0:29614", "--user-agent-suffix=OSS-kubectl"}, WorkingDir:"", Ports:[]v1.ContainerPort{v1.ContainerPort{Name:"healthz", HostPort:29612, ContainerPort:29612, Protocol:"TCP", HostIP:""}, v1.ContainerPort{Name:"metrics", HostPort:29614, ContainerPort:29614, Protocol:"TCP", HostIP:""}}, EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"AZURE_CREDENTIAL_FILE", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0027fc1c0)}, v1.EnvVar{Name:"CSI_ENDPOINT", Value:"unix:///csi/csi.sock", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:209715200, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:10, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"10m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:20971520, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"20Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"socket-dir", ReadOnly:false, MountPath:"/csi", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"azure-cred", ReadOnly:false, MountPath:"/etc/kubernetes/", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mzxmx", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(0xc002ceae00), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024742d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"csi-azurefile-controller-sa", DeprecatedServiceAccount:"csi-azurefile-controller-sa", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003c8f50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node-role.kubernetes.io/master", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/controlplane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node-role.kubernetes.io/control-plane", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002474340)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002474360)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-cluster-critical", Priority:(*int32)(0xc002474368), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00247436c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc002d84420), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0908 05:56:37.652973       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-azurefile-controller-78f78cfdd5", timestamp:time.Time{wall:0xc0be7ee564cbc12c, ext:168676288657, loc:(*time.Location)(0x724bda0)}}
I0908 05:56:37.654035       1 replica_set.go:457] Pod csi-azurefile-controller-78f78cfdd5-2zg8g updated, objectMeta {Name:csi-azurefile-controller-78f78cfdd5-2zg8g GenerateName:csi-azurefile-controller-78f78cfdd5- Namespace:kube-system SelfLink: UID:b8f6e917-c992-4277-8bd4-da79cb368f57 ResourceVersion:960 Generation:0 CreationTimestamp:2022-09-08 05:56:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:78f78cfdd5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-78f78cfdd5 UID:6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26 Controller:0xc002a43557 BlockOwnerDeletion:0xc002a43558}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:56:37 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:termina
tionMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:arg
s":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]} -> {Name:csi-azurefile-controller-78f78cfdd5-2zg8g GenerateName:csi-azurefile-controller-78f78cfdd5- Namespace:kube-system SelfLink: UID:b8f6e917-c992-4277-8bd4-da79cb368f57 ResourceVersion:963 Generation:0 CreationTimestamp:2022-09-08 05:56:37 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-azurefile-controller pod-template-hash:78f78cfdd5] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-azurefile-controller-78f78cfdd5 UID:6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26 Controller:0xc0024757a7 BlockOwnerDeletion:0xc0024757a8}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:56:37 +0000 UTC FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6be0ab7d-63b0-4e73-b96d-f6a13c6a9d26\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"azurefile\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"AZURE_CREDENTIAL_FILE\"}":{".":{},"f:name":{},"f:valueFrom":{".":{},"f:configMapKeyRef":{}}},"k:{\"name\":\"CSI_ENDPOINT\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:livenessProbe":{".":{},"f:failureThreshold":{},"f:httpGet":{".":{},"f:path":{},"f:port":{},"f:scheme":{}},"f:initialDelaySeconds":{},"f:periodSeconds":{},"f:successThreshold":{},"f:timeoutSeconds":{}},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":29612,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}},"k:{\"containerPort\":29614,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:hostPort":{},"f:name":{},"f:protocol":{}}},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}},"k:{\"mountPath\":\"/etc/kubernetes/\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-attacher\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-provisioner\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests
":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-resizer\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"csi-snapshotter\"}":{".":{},"f:args":{},"f:env":{".":{},"k:{\"name\":\"ADDRESS\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}},"k:{\"name\":\"liveness-probe\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/csi\"}":{".":{},"f:mountPath":{},"f:name":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:hostNetwork":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{},"f:volumes":{".":{},"k:{\"name\":\"azure-cred\"}":{".":{},"f:hostPath":{".":{},"f:path":{},"f:type":{}},"f:name":{}},"k:{\"name\":\"socket-dir\"}":{".":{},"f:emptyDir":{},"f:name":{}}}}} Subresource:}]}.
I0908 05:56:37.654658       1 deployment_util.go:774] Deployment "csi-azurefile-controller" timed out (false) [last progress check: 2022-09-08 05:56:37 +0000 UTC - now: 2022-09-08 05:56:37.654650215 +0000 UTC m=+168.713605736]
I0908 05:56:37.653208       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-78f78cfdd5-5s64m"
I0908 05:56:37.653265       1 pvc_protection_controller.go:380] "Enqueuing PVCs for Pod" pod="kube-system/csi-azurefile-controller-78f78cfdd5-5s64m" podUID=958a6168-f650-4f7d-bafa-ef05d9c788bb
I0908 05:56:37.653686       1 taint_manager.go:401] "Noticed pod update" pod="kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g"
... skipping 203 lines ...
I0908 05:56:47.621198       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-8545756757-k467p"
I0908 05:56:47.621479       1 replica_set.go:457] Pod csi-snapshot-controller-8545756757-2drnc updated, objectMeta {Name:csi-snapshot-controller-8545756757-2drnc GenerateName:csi-snapshot-controller-8545756757- Namespace:kube-system SelfLink: UID:cc6346df-425a-41d1-9125-e85cf15307b8 ResourceVersion:1087 Generation:0 CreationTimestamp:2022-09-08 05:56:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller pod-template-hash:8545756757] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-8545756757 UID:c4c5627b-e84f-43e0-9c40-37e0d1f594c5 Controller:0xc000ca68d7 BlockOwnerDeletion:0xc000ca68d8}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:56:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c5627b-e84f-43e0-9c40-37e0d1f594c5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:}]} -> {Name:csi-snapshot-controller-8545756757-2drnc GenerateName:csi-snapshot-controller-8545756757- Namespace:kube-system SelfLink: UID:cc6346df-425a-41d1-9125-e85cf15307b8 ResourceVersion:1094 Generation:0 CreationTimestamp:2022-09-08 05:56:47 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app:csi-snapshot-controller 
pod-template-hash:8545756757] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:csi-snapshot-controller-8545756757 UID:c4c5627b-e84f-43e0-9c40-37e0d1f594c5 Controller:0xc000ec617e BlockOwnerDeletion:0xc000ec617f}] Finalizers:[] ZZZ_DeprecatedClusterName: ManagedFields:[{Manager:kube-controller-manager Operation:Update APIVersion:v1 Time:2022-09-08 05:56:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c4c5627b-e84f-43e0-9c40-37e0d1f594c5\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"csi-snapshot-controller\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:nodeSelector":{},"f:priorityClassName":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:serviceAccount":{},"f:serviceAccountName":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}} Subresource:} {Manager:kubelet Operation:Update APIVersion:v1 Time:2022-09-08 05:56:47 +0000 UTC FieldsType:FieldsV1 FieldsV1:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} Subresource:status}]}.
I0908 05:56:47.621629       1 disruption.go:427] updatePod called on pod "csi-snapshot-controller-8545756757-2drnc"
I0908 05:56:47.621655       1 disruption.go:490] No PodDisruptionBudgets found for pod csi-snapshot-controller-8545756757-2drnc, PodDisruptionBudget controller will avoid syncing.
I0908 05:56:47.621663       1 disruption.go:430] No matching pdb for pod "csi-snapshot-controller-8545756757-2drnc"
I0908 05:56:47.624969       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="86.084768ms"
I0908 05:56:47.624999       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:56:47.625029       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-08 05:56:47.625015661 +0000 UTC m=+178.683971182"
I0908 05:56:47.625467       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-08 05:56:47 +0000 UTC - now: 2022-09-08 05:56:47.625460366 +0000 UTC m=+178.684415887]
I0908 05:56:47.641906       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="16.871209ms"
I0908 05:56:47.642043       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-08 05:56:47.641931271 +0000 UTC m=+178.700886892"
I0908 05:56:47.642336       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0908 05:56:47.642662       1 replica_set_utils.go:59] Updating status for : kube-system/csi-snapshot-controller-8545756757, replicas 0->2 (need 2), fullyLabeledReplicas 0->2, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
I0908 05:56:47.643201       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-08 05:56:47 +0000 UTC - now: 2022-09-08 05:56:47.643192986 +0000 UTC m=+178.702148607]
I0908 05:56:47.647930       1 deployment_controller.go:281] "ReplicaSet updated" replicaSet="kube-system/csi-snapshot-controller-8545756757"
I0908 05:56:47.649761       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-8545756757" (30.57358ms)
I0908 05:56:47.649799       1 controller_utils.go:189] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"kube-system/csi-snapshot-controller-8545756757", timestamp:time.Time{wall:0xc0be7ee7e11a6291, ext:178614332818, loc:(*time.Location)(0x724bda0)}}
I0908 05:56:47.649908       1 replica_set.go:667] Finished syncing ReplicaSet "kube-system/csi-snapshot-controller-8545756757" (99.501µs)
I0908 05:56:47.652090       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="10.146525ms"
I0908 05:56:47.652122       1 deployment_controller.go:490] "Error syncing deployment" deployment="kube-system/csi-snapshot-controller" err="Operation cannot be fulfilled on deployments.apps \"csi-snapshot-controller\": the object has been modified; please apply your changes to the latest version and try again"
I0908 05:56:47.652161       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-08 05:56:47.652148897 +0000 UTC m=+178.711104518"
I0908 05:56:47.656816       1 deployment_controller.go:578] "Finished syncing deployment" deployment="kube-system/csi-snapshot-controller" duration="4.654658ms"
I0908 05:56:47.656851       1 deployment_controller.go:176] "Updating deployment" deployment="kube-system/csi-snapshot-controller"
I0908 05:56:47.656914       1 deployment_controller.go:576] "Started syncing deployment" deployment="kube-system/csi-snapshot-controller" startTime="2022-09-08 05:56:47.656899756 +0000 UTC m=+178.715855277"
I0908 05:56:47.657245       1 deployment_util.go:774] Deployment "csi-snapshot-controller" timed out (false) [last progress check: 2022-09-08 05:56:47 +0000 UTC - now: 2022-09-08 05:56:47.65723926 +0000 UTC m=+178.716194881]
I0908 05:56:47.657284       1 progress.go:195] Queueing up deployment "csi-snapshot-controller" for a progress check after 599s
... skipping 1529 lines ...
I0908 06:03:08.967668       1 replica_set.go:577] "Too few replicas" replicaSet="azurefile-5356/azurefile-volume-tester-vc9xt-557d85bfb8" need=1 creating=1
I0908 06:03:08.967379       1 event.go:294] "Event occurred" object="azurefile-5356/azurefile-volume-tester-vc9xt" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set azurefile-volume-tester-vc9xt-557d85bfb8 to 1"
I0908 06:03:08.967401       1 deployment_controller.go:215] "ReplicaSet added" replicaSet="azurefile-5356/azurefile-volume-tester-vc9xt-557d85bfb8"
I0908 06:03:08.972445       1 deployment_controller.go:176] "Updating deployment" deployment="azurefile-5356/azurefile-volume-tester-vc9xt"
I0908 06:03:08.972833       1 deployment_util.go:774] Deployment "azurefile-volume-tester-vc9xt" timed out (false) [last progress check: 2022-09-08 06:03:08.967014646 +0000 UTC m=+560.025970167 - now: 2022-09-08 06:03:08.972827619 +0000 UTC m=+560.031783140]
I0908 06:03:08.982200       1 deployment_controller.go:578] "Finished syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-vc9xt" duration="19.697346ms"
I0908 06:03:08.982234       1 deployment_controller.go:490] "Error syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-vc9xt" err="Operation cannot be fulfilled on deployments.apps \"azurefile-volume-tester-vc9xt\": the object has been modified; please apply your changes to the latest version and try again"
I0908 06:03:08.982262       1 deployment_controller.go:576] "Started syncing deployment" deployment="azurefile-5356/azurefile-volume-tester-vc9xt" startTime="2022-09-08 06:03:08.982249636 +0000 UTC m=+560.041205157"
I0908 06:03:08.983871       1 replica_set.go:394] Pod azurefile-volume-tester-vc9xt-557d85bfb8-ndvgs created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"azurefile-volume-tester-vc9xt-557d85bfb8-ndvgs", GenerateName:"azurefile-volume-tester-vc9xt-557d85bfb8-", Namespace:"azurefile-5356", SelfLink:"", UID:"7f986cd9-0f87-42ff-be7f-3a88ae1a48c5", ResourceVersion:"2319", Generation:0, CreationTimestamp:time.Date(2022, time.September, 8, 6, 3, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"azurefile-volume-tester-5018949295715050020", "pod-template-hash":"557d85bfb8"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"azurefile-volume-tester-vc9xt-557d85bfb8", UID:"945bcacb-0c54-4346-a841-d31e62a3ae49", Controller:(*bool)(0xc002660d17), BlockOwnerDeletion:(*bool)(0xc002660d18)}}, Finalizers:[]string(nil), ZZZ_DeprecatedClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.September, 8, 6, 3, 8, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0029360c0), Subresource:""}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"test-volume-1", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(0xc0029360d8), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"kube-api-access-mf5hm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc002fd8b60), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), 
StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"volume-tester", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/sh"}, Args:[]string{"-c", "echo 'hello world' >> /mnt/test-1/data && while true; do sleep 100; done"}, WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"test-volume-1", ReadOnly:false, MountPath:"/mnt/test-1", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"kube-api-access-mf5hm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002660de8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003c8310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002660e20)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002660e40)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002660e48), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002660e4c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc001f4bb70), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I0908 06:03:08.984509       1 deployment_util.go:774] Deployment "azurefile-volume-tester-vc9xt" timed out (false) [last progress check: 2022-09-08 06:03:08 +0000 UTC - now: 2022-09-08 06:03:08.984501365 +0000 UTC m=+560.043456886]
I0908 06:03:08.984135       1 controller_utils.go:240] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"azurefile-5356/azurefile-volume-tester-vc9xt-557d85bfb8", timestamp:time.Time{wall:0xc0be7f4739aadb38, ext:560026454173, loc:(*time.Location)(0x724bda0)}}
I0908 06:03:08.985230       1 taint_manager.go:401] "Noticed pod update" pod="azurefile-5356/azurefile-volume-tester-vc9xt-557d85bfb8-ndvgs"
I0908 06:03:08.985289       1 disruption.go:415] addPod called on pod "azurefile-volume-tester-vc9xt-557d85bfb8-ndvgs"
... skipping 1396 lines ...
I0908 06:06:21.834138       1 gc_controller.go:277] GC'ing unscheduled pods which are terminating.
2022/09/08 06:06:22 ===================================================

JUnit report was created: /logs/artifacts/junit_01.xml

Ran 6 of 34 Specs in 362.598 seconds
SUCCESS! -- 6 Passed | 0 Failed | 0 Pending | 28 Skipped

You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce several new features, improvements, and a small handful of breaking changes.
A release candidate for 2.0 is now available and 2.0 should GA in Fall 2021.  Please give the RC a try and send us feedback!
  - To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/ver2/docs/MIGRATING_TO_V2.md
... skipping 44 lines ...
INFO: Creating log watcher for controller capz-system/capz-controller-manager, pod capz-controller-manager-858df9cd95-pkhpb, container manager
STEP: Dumping workload cluster default/capz-ejsbt5 logs
Sep  8 06:08:03.011: INFO: Collecting logs for Linux node capz-ejsbt5-control-plane-nnz8j in cluster capz-ejsbt5 in namespace default

Sep  8 06:09:03.013: INFO: Collecting boot logs for AzureMachine capz-ejsbt5-control-plane-nnz8j

Failed to get logs for machine capz-ejsbt5-control-plane-6b4rb, cluster default/capz-ejsbt5: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  8 06:09:04.399: INFO: Collecting logs for Linux node capz-ejsbt5-md-0-rfbqm in cluster capz-ejsbt5 in namespace default

Sep  8 06:10:04.401: INFO: Collecting boot logs for AzureMachine capz-ejsbt5-md-0-rfbqm

Failed to get logs for machine capz-ejsbt5-md-0-749dbf84d8-bmgzq, cluster default/capz-ejsbt5: open /etc/azure-ssh/azure-ssh: no such file or directory
Sep  8 06:10:05.143: INFO: Collecting logs for Linux node capz-ejsbt5-md-0-2rbjw in cluster capz-ejsbt5 in namespace default

Sep  8 06:11:05.145: INFO: Collecting boot logs for AzureMachine capz-ejsbt5-md-0-2rbjw

Failed to get logs for machine capz-ejsbt5-md-0-749dbf84d8-n64tt, cluster default/capz-ejsbt5: open /etc/azure-ssh/azure-ssh: no such file or directory
STEP: Dumping workload cluster default/capz-ejsbt5 kube-system pod logs
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g, container csi-provisioner
STEP: Fetching kube-system pod logs took 1.031640161s
STEP: Dumping workload cluster default/capz-ejsbt5 Azure activity log
STEP: Creating log watcher for controller kube-system/calico-kube-controllers-7867496574-kq9tc, container calico-kube-controllers
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-8545756757-2drnc
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-8545756757-k467p, container csi-snapshot-controller
STEP: Collecting events for Pod kube-system/csi-snapshot-controller-8545756757-k467p
STEP: Collecting events for Pod kube-system/kube-controller-manager-capz-ejsbt5-control-plane-nnz8j
STEP: Collecting events for Pod kube-system/kube-apiserver-capz-ejsbt5-control-plane-nnz8j
STEP: Creating log watcher for controller kube-system/kube-proxy-9p699, container kube-proxy
STEP: Collecting events for Pod kube-system/calico-node-sj2ht
STEP: Creating log watcher for controller kube-system/kube-controller-manager-capz-ejsbt5-control-plane-nnz8j, container kube-controller-manager
STEP: failed to find events of Pod "kube-apiserver-capz-ejsbt5-control-plane-nnz8j"
STEP: Creating log watcher for controller kube-system/etcd-capz-ejsbt5-control-plane-nnz8j, container etcd
STEP: Collecting events for Pod kube-system/etcd-capz-ejsbt5-control-plane-nnz8j
STEP: Creating log watcher for controller kube-system/coredns-6d4b75cb6d-cp7j9, container coredns
STEP: Creating log watcher for controller kube-system/calico-node-clwlz, container calico-node
STEP: failed to find events of Pod "etcd-capz-ejsbt5-control-plane-nnz8j"
STEP: Collecting events for Pod kube-system/calico-kube-controllers-7867496574-kq9tc
STEP: Creating log watcher for controller kube-system/kube-proxy-9ljrj, container kube-proxy
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g, container csi-snapshotter
STEP: Collecting events for Pod kube-system/kube-proxy-9p699
STEP: Collecting events for Pod kube-system/kube-proxy-9ljrj
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g, container csi-resizer
... skipping 22 lines ...
STEP: Creating log watcher for controller kube-system/csi-snapshot-controller-8545756757-2drnc, container csi-snapshot-controller
STEP: Creating log watcher for controller kube-system/coredns-6d4b75cb6d-dxb7n, container coredns
STEP: Collecting events for Pod kube-system/calico-node-clwlz
STEP: Creating log watcher for controller kube-system/csi-azurefile-controller-78f78cfdd5-2zg8g, container csi-attacher
STEP: Collecting events for Pod kube-system/kube-proxy-lkkdx
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-dxb7n
STEP: failed to find events of Pod "kube-controller-manager-capz-ejsbt5-control-plane-nnz8j"
STEP: Collecting events for Pod kube-system/kube-scheduler-capz-ejsbt5-control-plane-nnz8j
STEP: Collecting events for Pod kube-system/coredns-6d4b75cb6d-cp7j9
STEP: failed to find events of Pod "kube-scheduler-capz-ejsbt5-control-plane-nnz8j"
STEP: Creating log watcher for controller kube-system/kube-proxy-lkkdx, container kube-proxy
STEP: Creating log watcher for controller kube-system/calico-node-nv5n6, container calico-node
STEP: Collecting events for Pod kube-system/metrics-server-7d674f87b8-g9r6p
STEP: Creating log watcher for controller kube-system/calico-node-sj2ht, container calico-node
STEP: Creating log watcher for controller kube-system/kube-scheduler-capz-ejsbt5-control-plane-nnz8j, container kube-scheduler
STEP: Creating log watcher for controller kube-system/kube-apiserver-capz-ejsbt5-control-plane-nnz8j, container kube-apiserver
... skipping 20 lines ...